Search Results: "tca"

13 October 2021

Dirk Eddelbuettel: RcppQuantuccia 0.0.4 on CRAN: Updated Calendar

A new release of RcppQuantuccia arrived on CRAN earlier today. RcppQuantuccia brings the Quantuccia header-only subset / variant of QuantLib to R. At the current stage, it mostly offers date and calendaring functions. This release is the first in two years and brings a few internal updates (such as a switch of continuous integration to the trusted r-ci setup) along with a first update of the United States calendar, which, just like RQuantLib, now knows about the two new calendars LiborUpdate and FederalReserve. So now we can, for example, look for holidays during June of next year under the Federal Reserve calendar and see
> library(RcppQuantuccia)
> setCalendar("UnitedStates/FederalReserve")
> getHolidays(as.Date("2022-06-01"), as.Date("2022-06-30"))
[1] "2022-06-20"
> 
that Juneteenth 2022 will be observed on (Monday) June 20th. We should note that Quantuccia itself was a bit of a trial balloon and is not actively maintained, so we may concentrate on these calendaring functions to keep them in sync with QuantLib. Being a header-only subset is good, and the removal of the (very!) expensive (in terms of compiled library size) Sobol sequence-based RNG in release 0.0.3 was the right call. So, time permitting, a leaner, meaner RcppQuantuccia with a calendaring focus may emerge. The complete list of changes follows.

Changes in version 0.0.4 (2021-10-12)
  • Allow for 'Null' calendar without weekends or holidays
  • Switch CI use to r-ci
  • Updated UnitedStates calendar to current QuantLib calendar
  • Small updates to DESCRIPTION and README.md

Courtesy of CRANberries, there is also a diffstat report relative to the previous release. More information is on the RcppQuantuccia page. Issues and bug reports should go to the GitHub issue tracker. If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

15 August 2021

Mike Gabriel: Chromium Policies Managed under Linux

For a customer project, I recently needed to take a closer look at the best strategies for deploying Chromium settings to thousands of client machines in a corporate network. Unfortunately, the information on how to deploy site-wide Chromium browser policies is a little scattered over the internet, and the intertwining of Chromium preferences and Chromium policies required deeper introspection. Here, I'd like to provide the result of that research, namely a list of references that I studied before setting up Chromium policies for the customer's proof-of-concept.

Difference between Preferences and Policies

Chromium can be controlled via preferences (mainly user preferences) and administratively rolled-out policy files. The difference between preferences and policies is explained here:
https://www.chromium.org/administrators/configuring-other-preferences

The site admin (or distro package maintainer) can pre-configure the user's Chromium experience via a master preferences file (/etc/chromium/master_preferences). This master preferences file is the template for the user's preferences file and gets copied over into the Chromium user profile folder on first browser start. Note: by studying the recent Chromium code, it was found out that /etc/chromium/master_preferences is the legacy filename of the initial preferences file. The new filename is /etc/chromium/initial_preferences. We will continue with master_preferences here, as most Linux distributions still provide the initial preferences via this file. Whereas the new filename is already supported by Chromium in openSUSE/SLES, it is not yet supported by Chromium in Debian/Ubuntu (see Debian bug #992178).

Difference between 'managed' and 'recommended' Policies

The difference between 'managed' and 'recommended' Chromium policies is explained here:
https://www.chromium.org/administrators/configuring-other-preferences

Quoting from the above URL (last visited 2021/08): Policies that should be editable by the user are called "recommended policies" and offer a better alternative than the master_preferences file. Their contents can be changed and are respected as long as the user has not modified the value of that preference themselves.

So, policies of type 'managed' override user preferences (and also lock them in the Chromium settings UI). Those 'managed' policies are good for enforcing browser settings. They can also be blended in for existing browser user profiles. Policies ('managed' and 'recommended') even get blended in at browser run-time when modified. Use case: e.g. rolling out browser security settings that are required for enforcing a site-policy-compliant browser user configuration.

Policies of type 'recommended' set the defaults of the Chromium browser. They apply to already existing browser profiles if the user hasn't tweaked the to-be-recommended settings yet. They also get applied at browser run-time. However, if the user has already fiddled with such a to-be-recommended setting via the Chromium settings UI, the user's choice takes precedence over the recommended policy. Use case: policies of type 'recommended' are good for long-term adjustments to browser configuration options. Especially if users don't touch their browser settings much, 'recommended' policies are a good approach for fine-tuning site-wide browser settings on user machines.

CAVEAT: While researching this topic, two problematic observations were made:
  1. All setting parameters put into the master preferences file (/etc/chromium/master_preferences) can't be superseded by 'recommended' Chromium policies. Pre-configured preferences are handled as if the user had already tinkered with those preferences in Chromium's settings UI. It was also discovered that distributors tend to overload /etc/chromium/master_preferences with their best-practice browser settings. Everything that is not required on first browser start should instead be provided as 'recommended' policies, already in the distribution packages for Chromium.
  2. There does not seem to be an elegant way to override the package maintainer's choice of options in the /etc/chromium/master_preferences file via some file drop-in replacement (see Debian bug #992179). So, deploying Chromium involves post-install config file tinkering by hand, by script, or by config management tools. There is room for improvement here.
Managing Chromium Policy with Files

Chromium supports 'managed' policies and 'recommended' policies. Policies get deployed as JSON files. For Linux, this is explained here:
https://www.chromium.org/administrators/linux-quick-start

Note that for Chromium, the policy files have to be placed into /etc/chromium. The example on the above web page shows where to place them for Google Chrome.

Good 'How to Get Started' Documentation for Chromium Policy Setups

This overview page provides good get-started documentation on how to provision Chromium via policies:
https://www.chromium.org/administrators/configuring-policy-for-extensions

First-Run Preferences

It seems not every setting can be tweaked via a Chromium policy. Especially the first-run preferences are affected by this:
https://www.chromium.org/developers/design-documents/first-run-customiza...

So, for tweaking the first-run settings, one needs to adjust /etc/chromium/master_preferences (which is suboptimal; again, see Debian bug #992179 for a detailed explanation of why). The required adjustments to master_preferences can be achieved with the jq command line tool; here is one example:
# Tweak chromium's /etc/chromium/master_preferences file.
# First change: drop everything that can be provisioned via Chromium Policies.
# Rest of the changes: Adjust preferences for new users to our needs for all
# parameters that cannot be provisioned via Chromium Policies.
cat /etc/chromium/master_preferences | \
    jq 'del(.browser.show_home_button, .browser.check_default_browser, .homepage)' | \
    jq '.first_run_tabs=[ "https://first-run.example.com/", "https://your-admin-faq.example.com" ]' | \
    jq '.default_apps="noinstall"' | \
    jq '.credentials_enable_service=false | .credentials_enable_autosignin=false' | \
    jq '.search.suggest_enabled=false' | \
    jq '.distribution.import_bookmarks=false | .distribution.verbose_logging=false | .distribution.skip_first_run_ui=true' | \
    jq '.distribution.create_all_shortcuts=true | .distribution.suppress_first_run_default_browser_prompt=true' | \
    cat > /etc/chromium/master_preferences.adapted
if [ -s /etc/chromium/master_preferences.adapted ]; then
        mv /etc/chromium/master_preferences.adapted /etc/chromium/master_preferences
else
        echo "WARNING (chromium tweaks): The file /etc/chromium/master_preferences.adapted was empty after tweaking."
        echo "                           Leaving /etc/chromium/master_preferences untouched."
fi
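For environments where jq is not available, the same tweaks can also be done with a few lines of Python. This is a minimal sketch of mine (same file and keys as the jq pipeline above, error handling omitted), not part of the original setup:

#!/usr/bin/python3
# Sketch: apply the same master_preferences tweaks as the jq pipeline
# above, using only Python's standard json module.
import json

PREFS = "/etc/chromium/master_preferences"

with open(PREFS) as f:
    prefs = json.load(f)

# First change: drop everything that can be provisioned via Chromium Policies.
prefs.get("browser", {}).pop("show_home_button", None)
prefs.get("browser", {}).pop("check_default_browser", None)
prefs.pop("homepage", None)

# Rest of the changes: adjust preferences for new users for all parameters
# that cannot be provisioned via Chromium Policies.
prefs["first_run_tabs"] = ["https://first-run.example.com/",
                           "https://your-admin-faq.example.com"]
prefs["default_apps"] = "noinstall"
prefs["credentials_enable_service"] = False
prefs["credentials_enable_autosignin"] = False
prefs.setdefault("search", {})["suggest_enabled"] = False
prefs.setdefault("distribution", {}).update({
    "import_bookmarks": False,
    "verbose_logging": False,
    "skip_first_run_ui": True,
    "create_all_shortcuts": True,
    "suppress_first_run_default_browser_prompt": True,
})

with open(PREFS, "w") as f:
    json.dump(prefs, f, indent=2)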
The list of available (first-run and other) initial preferences can be found in Chromium's pref_names.cc code file:
https://github.com/chromium/chromium/blob/main/chrome/common/pref_names.cc

List of Available Chromium Policies

The list of available Chromium policies used to be maintained in the Chromium wiki: https://www.chromium.org/administrators/policy-list-3 However, that page these days redirects to the Google Chrome Enterprise documentation:
https://chromeenterprise.google/policies/

Each policy variable has its own documentation page there. Please note the "Supported Features" section for each policy item. There you can see whether the policy supports being placed into "recommended" and/or "managed". This is an example /etc/chromium/policies/managed/50_browser-security.json file (note that all kinds of filenames are allowed, even files without a .json suffix):
{
  "HideWebStoreIcon": true,
  "DefaultBrowserSettingEnabled": false,
  "AlternateErrorPagesEnabled": false,
  "AutofillAddressEnabled": false,
  "AutofillCreditCardEnabled": false,
  "NetworkPredictionOptions": 2,
  "SafeBrowsingProtectionLevel": 0,
  "PaymentMethodQueryEnabled": false,
  "BrowserSignin": false,
}
And this is an example /etc/chromium/policies/recommended/50_homepage.json file:
{
  "ShowHomeButton": true,
  "WelcomePageOnOSUpgradeEnabled": false,
  "HomepageLocation": "https://www.example.com"
}
And for defining a custom search provider, I use /etc/chromium/policies/recommended/60_searchprovider.json (here, I recommend not using DuckDuckGo as DefaultSearchProviderName, but some custom name; unfortunately, I did not find a policy parameter that simply selects an already existing search provider name as the default :-( ):
{
  "DefaultSearchProviderEnabled": true,
  "DefaultSearchProviderName": "DuckDuckGo used by Example.com",
  "DefaultSearchProviderIconURL": "https://duckduckgo.com/favicon.ico",
  "DefaultSearchProviderEncodings": ["UTF-8"],
  "DefaultSearchProviderSearchURL": "https://duckduckgo.com/?q= searchTerms ",
  "DefaultSearchProviderSuggestURL": "https://duckduckgo.com/ac/?q= searchTerms &type=list",
  "DefaultSearchProviderNewTabURL": "https://duckduckgo.com/chrome_newtab"
}
The Essence and Recommendations

On first startup, Chromium copies /etc/chromium/master_preferences to $HOME/.config/chromium/Default/Preferences. It does this only if the Chromium user profile hasn't been created yet. So, settings put into master_preferences by the distro and the site or device admin are a one-time shot: a new user logs into a device, and the preferences get applied on the first start of Chromium. Chromium policy files, however, get continuously applied at browser runtime. Chromium watches its policy files, and you can observe Chromium settings change when policy files get modified. So, for continuously provisioning site-wide settings that should trickle into every user's browser configuration, Chromium policies should definitely be preferred over master_preferences. When using Chromium policies, one needs to take into account that settings in /etc/chromium/master_preferences seem to take precedence over 'recommended' policies. So, settings that you want to deploy as recommended policies must be removed from /etc/chromium/master_preferences. Essentially, these are the recommendations extracted from all the above research and information for deploying Chromium at enterprise scale:
  1. Everything that's required at first-run should go into /etc/chromium/master_preferences.
  2. Everything that's not required at first-run should be removed from /etc/chromium/master_preferences.
  3. Everything that's deployable as a Chromium policy should be deployed as a policy (as you can influence existing browser sessions with that, also long-term)
  4. Chromium policy files should be split up into several files. Chromium parses those files in alpha-numerical order. If policies occur more than once, the last policy being parsed takes precedence (the sketch below illustrates this merge order).
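To make that merge order tangible, here is a small sketch of mine (not from the Chromium documentation) that recomputes the effective policy set from one policy directory, assuming all files contain plain JSON:

#!/usr/bin/python3
# Sketch: merge Chromium policy files the way the browser does, i.e.
# parse them in alpha-numerical order with the last occurrence winning.
import glob
import json
import os

def effective_policies(directory="/etc/chromium/policies/managed"):
    merged = {}
    for path in sorted(glob.glob(f"{directory}/*")):
        if not os.path.isfile(path):
            continue
        with open(path) as f:
            # dict.update() gives the same last-one-wins semantics.
            merged.update(json.load(f))
    return merged

if __name__ == "__main__":
    print(json.dumps(effective_policies(), indent=2))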
Feedback

If you have any feedback or input on this post, I'd be happy to hear it. Please get in touch via the various channels where I am known as sunweaver (OFTC and libera.chat IRC, [matrix], Mastodon, E-Mail at debian.org, etc.). Looking forward to hearing from you. Thanks!

light+love
Mike Gabriel (aka sunweaver)

30 June 2021

Enrico Zini: Systemd containers with unittest

This is part of a series of posts on ideas for an ansible-like provisioning system, implemented in Transilience. Unit testing some parts of Transilience, like the apt and systemd actions, or remote Mitogen connections, can really use a containerized system for testing. To have that, I reused my work on nspawn-runner to build a simple and very fast system of ephemeral containers, with minimal dependencies, based on systemd-nspawn and btrfs snapshots.

Setup

To be able to use systemd-nspawn --ephemeral, the chroots need to be btrfs subvolumes. If you are not running on a btrfs filesystem, you can create one to run the tests, even on a file:
fallocate -l 1.5G testfile
/usr/sbin/mkfs.btrfs testfile
sudo mount -o loop testfile test_chroots/
I created a script to setup the test environment, here is an extract:
mkdir -p test_chroots
cat << EOF > "test_chroots/CACHEDIR.TAG"
Signature: 8a477f597d28d172789f06886806bc55
# chroots used for testing transilience, can be regenerated with make-test-chroot
EOF
btrfs subvolume create test_chroots/buster
eatmydata debootstrap --variant=minbase --include=python3,dbus,systemd buster test_chroots/buster
CACHEDIR.TAG is a nice trick to tell backup software not to bother backing up the contents of this directory, since it can easily be regenerated. eatmydata is optional, and it speeds up debootstrap quite a bit.

Running unittest with sudo

Here's a simple helper to drop root as soon as possible, and regain it only when needed. Note that it needs $SUDO_UID and $SUDO_GID, which are set by sudo, to know which user to drop into:
class ProcessPrivs:
    """
    Drop root privileges and regain them only when needed
    """
    def __init__(self):
        self.orig_uid, self.orig_euid, self.orig_suid = os.getresuid()
        self.orig_gid, self.orig_egid, self.orig_sgid = os.getresgid()
        if "SUDO_UID" not in os.environ:
            raise RuntimeError("Tests need to be run under sudo")
        self.user_uid = int(os.environ["SUDO_UID"])
        self.user_gid = int(os.environ["SUDO_GID"])
        self.dropped = False
    def drop(self):
        """
        Drop root privileges
        """
        if self.dropped:
            return
        os.setresgid(self.user_gid, self.user_gid, 0)
        os.setresuid(self.user_uid, self.user_uid, 0)
        self.dropped = True
    def regain(self):
        """
        Regain root privileges
        """
        if not self.dropped:
            return
        os.setresuid(self.orig_suid, self.orig_suid, self.user_uid)
        os.setresgid(self.orig_sgid, self.orig_sgid, self.user_gid)
        self.dropped = False
    @contextlib.contextmanager
    def root(self):
        """
        Regain root privileges for the duration of this context manager
        """
        if not self.dropped:
            yield
        else:
            self.regain()
            try:
                yield
            finally:
                self.drop()
    @contextlib.contextmanager
    def user(self):
        """
        Drop root privileges for the duration of this context manager
        """
        if self.dropped:
            yield
        else:
            self.drop()
            try:
                yield
            finally:
                self.regain()
privs = ProcessPrivs()
privs.drop()
As soon as this module is loaded, root privileges are dropped, and can be regained for as little as possible using a handy context manager:
   with privs.root():
       subprocess.run(["systemd-run", ...], check=True, capture_output=True)
Using the chroot from test cases

The infrastructure to set up and spin down ephemeral machines is relatively simple, once one has worked out the nspawn incantations:
class Chroot:
    """
    Manage an ephemeral chroot
    """
    running_chroots: Dict[str, "Chroot"] = {}
    def __init__(self, name: str, chroot_dir: Optional[str] = None):
        self.name = name
        if chroot_dir is None:
            self.chroot_dir = self.get_chroot_dir(name)
        else:
            self.chroot_dir = chroot_dir
        self.machine_name = f"transilience- uuid.uuid4() "
    def start(self):
        """
        Start nspawn on this given chroot.
        The systemd-nspawn command is run contained into its own unit using
        systemd-run
        """
        unit_config = [
            'KillMode=mixed',
            'Type=notify',
            'RestartForceExitStatus=133',
            'SuccessExitStatus=133',
            'Slice=machine.slice',
            'Delegate=yes',
            'TasksMax=16384',
            'WatchdogSec=3min',
        ]
        cmd = ["systemd-run"]
        for c in unit_config:
            cmd.append(f"--property= c ")
        cmd.extend((
            "systemd-nspawn",
            "--quiet",
            "--ephemeral",
            f"--directory= self.chroot_dir ",
            f"--machine= self.machine_name ",
            "--boot",
            "--notify-ready=yes"))
        log.info("%s: starting machine using image %s", self.machine_name, self.chroot_dir)
        log.debug("%s: running %s", self.machine_name, " ".join(shlex.quote(c) for c in cmd))
        with privs.root():
            subprocess.run(cmd, check=True, capture_output=True)
        log.debug("%s: started", self.machine_name)
        self.running_chroots[self.machine_name] = self
    def stop(self):
        """
        Stop the running ephemeral containers
        """
        cmd = ["machinectl", "terminate", self.machine_name]
        log.debug("%s: running %s", self.machine_name, " ".join(shlex.quote(c) for c in cmd))
        with privs.root():
            subprocess.run(cmd, check=True, capture_output=True)
        log.debug("%s: stopped", self.machine_name)
        del self.running_chroots[self.machine_name]
    @classmethod
    def create(cls, chroot_name: str) -> "Chroot":
        """
        Start an ephemeral machine from the given master chroot
        """
        res = cls(chroot_name)
        res.start()
        return res
    @classmethod
    def get_chroot_dir(cls, chroot_name: str):
        """
        Locate a master chroot under test_chroots/
        """
        chroot_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "test_chroots", chroot_name))
        if not os.path.isdir(chroot_dir):
            raise RuntimeError(f" chroot_dir  does not exists or is not a chroot directory")
        return chroot_dir
# We need to use atexit, because unittest won't run
# tearDown/tearDownClass/tearDownModule methods in case of KeyboardInterrupt
# and we need to make sure to terminate the nspawn containers at exit
@atexit.register
def cleanup():
    # Use a list to prevent changing running_chroots during iteration
    for chroot in list(Chroot.running_chroots.values()):
        chroot.stop()
And here's a TestCase mixin that starts a containerized system and opens a Mitogen connection to it:
class ChrootTestMixin:
    """
    Mixin to run tests over a setns connection to an ephemeral systemd-nspawn
    container running one of the test chroots
    """
    chroot_name = "buster"
    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        import mitogen
        from transilience.system import Mitogen
        cls.broker = mitogen.master.Broker()
        cls.router = mitogen.master.Router(cls.broker)
        cls.chroot = Chroot.create(cls.chroot_name)
        with privs.root():
            cls.system = Mitogen(
                    cls.chroot.name, "setns", kind="machinectl",
                    python_path="/usr/bin/python3",
                    container=cls.chroot.machine_name, router=cls.router)
    @classmethod
    def tearDownClass(cls):
        super().tearDownClass()
        cls.system.close()
        cls.broker.shutdown()
        cls.chroot.stop()
Running tests

Once the tests are set up, everything goes on as normal, except one needs to run nose2 with sudo:
sudo nose2-3
Spin-up time for containers is pretty fast, and the tests drop root as soon as possible, regaining it only for as little as needed. Also, the dependencies for all this are minimal and available on most systems, and the setup instructions seem pretty straightforward.
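For illustration, a test module using the mixin could look like this minimal sketch of mine (the test body is deliberately trivial, since running actual Transilience actions over the connection is out of scope here):

import unittest

# Assuming ChrootTestMixin from above is importable in this module.
class TestBuster(ChrootTestMixin, unittest.TestCase):
    # Which master chroot under test_chroots/ to clone for this test case.
    chroot_name = "buster"

    def test_connection_is_up(self):
        # setUpClass started the ephemeral container and opened the
        # Mitogen connection; real tests would run actions through it.
        self.assertIsNotNone(self.system)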

25 June 2021

Enrico Zini: Parsing YAML

This is part of a series of posts on ideas for an Ansible-like provisioning system, implemented in Transilience. The time has come for me to try and prototype whether it's possible to load some Transilience roles from Ansible's YAML instead of Python. The data models of Transilience and Ansible are not exactly the same, and several differences come to mind. To simplify the work, I'll start from loading a single role out of Ansible, not an entire playbook. TL;DR: scroll to the bottom of the post for the conclusion!

Loading tasks

The first problem of loading an Ansible task is to figure out which of the keys is the module name. I have so far failed to find precise reference documentation about what keywords are used to define a task, so I'm going by guesswork and, if needed, a look at Ansible's sources. My first attempt goes by excluding all known non-module keywords:
        candidates = []
        for key in task_info.keys():
            if key in ("name", "args", "notify"):
                continue
            candidates.append(key)
        if len(candidates) != 1:
            raise RoleNotLoadedError(f"could not find a known module in task {task_info!r}")
        modname = candidates[0]
        if modname.startswith("ansible.builtin."):
            name = modname[16:]
        else:
            name = modname
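Wrapped up as a self-contained function with an example task (the task dict is made up for illustration), the detection works like this:

class RoleNotLoadedError(Exception):
    pass

def find_module_name(task_info: dict) -> str:
    candidates = [key for key in task_info if key not in ("name", "args", "notify")]
    if len(candidates) != 1:
        raise RoleNotLoadedError(f"could not find a known module in task {task_info!r}")
    modname = candidates[0]
    # Strip the fully qualified "ansible.builtin." prefix, if present.
    return modname[16:] if modname.startswith("ansible.builtin.") else modname

task = {"name": "install postfix", "ansible.builtin.apt": {"name": "postfix"}, "notify": "reload postfix"}
print(find_module_name(task))  # -> "apt"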
This means that Ansible keywords like when or with will break the parsing, and that's fine since they are not supported yet. args seems to carry arguments to the module when the module's main argument is not a dict, as may happen at least with the command module.

Task parameters

One can do all sorts of chaotic things to pass parameters to Ansible tasks: for example, string lists can be lists of strings or strings with comma-separated lists, they can be preprocessed via Jinja2 templating, and they can be complex data structures that might contain strings that need Jinja2 preprocessing. I ended up mapping the behaviours I encountered in an AST-like class hierarchy which includes recursive complex structures.

Variables

Variables look hard: Ansible has a big free messy cauldron of global variables, and Transilience needs a predefined list of per-role variables. However, variables are mainly used inside Jinja2 templates, and Jinja2 can parse to an Abstract Syntax Tree and has useful methods to examine its AST. Using that, I managed with reasonable effort to scan an Ansible role and generate a list of all the variables it uses! I can then use that list, filter out facts-specific names like ansible_domain, and use them to add variable definitions to the Transilience roles. That is exciting!

Handlers

Before loading tasks, I load handlers as one-action roles, and index them by name. When an Ansible task notifies a handler, I can then look up by name the roles I generated in the earlier pass, and I have all that I need.

Parsed Abstract Syntax Tree

Most of the results of all this parsing started looking like an AST, so I changed the rest of the prototype to generate an AST. This means that, for a well defined subset of Ansible's YAML, there now exists a tool that is able to parse it into an AST and reason with it. Transilience's playbooks gained a --ansible-to-ast option to parse an Ansible role and dump the resulting AST as JSON.
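Before looking at that dump, here is a quick sketch of the variable scan described above, using Jinja2's own parser and the jinja2.meta module (a simplified standalone example, not Transilience's actual scanning code):

from jinja2 import Environment, meta

env = Environment()
src = "root: {{postmaster}}\n{% for name, dest in aliases.items() %}{{name}}: {{dest}}\n{% endfor %}"
ast = env.parse(src)
# 'name' and 'dest' are declared by the for loop, so only the actual
# role variables get reported:
print(meta.find_undeclared_variables(ast))  # -> {'aliases', 'postmaster'}

And here is the --ansible-to-ast option in provision's --help output: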
$ ./provision --help
usage: provision [-h] [-v] [--debug] [-C] [--ansible-to-python role]
                 [--ansible-to-ast role]
Provision my VPS
optional arguments:
[...]
  -C, --check           do not perform changes, but check if changes would be
                        needed
  --ansible-to-ast role
                        print the AST of the given Ansible role as understood
                        by Transilience
The result is extremely verbose, since every parameter is itself a node in the tree, but I find it interesting. Here is, for example, a node for an Ansible task which has a templated parameter:
      {
        "node": "task",
        "action": "builtin.blockinfile",
        "parameters": {
          "path": {
            "node": "parameter",
            "type": "scalar",
            "value": "/etc/aliases"
          },
          "block": {
            "node": "parameter",
            "type": "template_string",
            "value": "root: {{postmaster}}\n{% for name, dest in aliases.items() %}\n{{name}}: {{dest}}\n{% endfor %}\n"
          }
        },
        "ansible_yaml": {
          "name": "configure /etc/aliases",
          "blockinfile": {},
          "notify": "reread /etc/aliases"
        },
        "notify": [
          "RereadEtcAliases"
        ]
      },
Here's a node for an Ansible template task converted to Transilience's model:
      {
        "node": "task",
        "action": "builtin.copy",
        "parameters": {
          "dest": {
            "node": "parameter",
            "type": "scalar",
            "value": "/etc/dovecot/local.conf"
          },
          "src": {
            "node": "parameter",
            "type": "template_path",
            "value": "dovecot.conf"
          }
        },
        "ansible_yaml": {
          "name": "configure dovecot",
          "template": {},
          "notify": "restart dovecot"
        },
        "notify": [
          "RestartDovecot"
        ]
      },
Executing

The first iteration of prototype code for executing parsed Ansible roles is a little exercise in closures and dynamically generated types:
    def get_role_class(self) -> Type[Role]:
        # If we have handlers, instantiate role classes for them
        handler_classes = {}
        for name, ansible_role in self.handlers.items():
            handler_classes[name] = ansible_role.get_role_class()
        # Create all the functions to start actions in the role
        start_funcs = []
        for task in self.tasks:
            start_funcs.append(task.get_start_func(handlers=handler_classes))
        # Function that calls all the 'Action start' functions
        def role_main(self):
            for func in start_funcs:
                func(self)
        if self.uses_facts:
            role_cls = type(self.name, (Role,), {
                "start": lambda host: None,
                "all_facts_available": role_main
            })
            role_cls = dataclass(role_cls)
            role_cls = with_facts(facts.Platform)(role_cls)
        else:
            role_cls = type(self.name, (Role,), {
                "start": role_main
            })
            role_cls = dataclass(role_cls)
        return role_cls
Now that the parsed Ansible role is a proper AST, I'm considering redesigning that using a generic Role class that works as an AST interpreter.

Generating Python

I maintain a library that can turn an invoice into Python code, and I have a convenient AST. I can't not generate Python code out of an Ansible role!
$ ./provision --help
usage: provision [-h] [-v] [--debug] [-C] [--ansible-to-python role]
                 [--ansible-to-ast role]
Provision my VPS
optional arguments:
[...]
  --ansible-to-python role
                        print the given Ansible role as Transilience Python
                        code
  --ansible-to-ast role
                        print the AST of the given Ansible role as understood
                        by Transilience
And will you look at this annotated extract:
$ ./provision --ansible-to-python mailserver
from __future__ import annotations
from typing import Any
from transilience import role
from transilience.actions import builtin, facts
# Role classes generated from Ansible handlers!
class ReloadPostfix(role.Role):
    def start(self):
        self.add(
            builtin.systemd(unit='postfix', state='reloaded'),
            name='reload postfix')
class RestartDovecot(role.Role):
    def start(self):
        self.add(
            builtin.systemd(unit='dovecot', state='restarted'),
            name='restart dovecot')
# The role, including a standard set of facts
@role.with_facts([facts.Platform])
class Role(role.Role):
    # These are the variables used by Jinja2 template files and strings. I need
    # to use Any, since Ansible variables are not typed
    aliases: Any = None
    myhostname: Any = None
    postmaster: Any = None
    virtual_domains: Any = None
    def all_facts_available(self):
        ...
        # A Jinja2 string inside a string list!
        self.add(
            builtin.command(
                argv=[
                    'certbot', 'certonly', '-d',
                    self.render_string('mail.{{ansible_domain}}'), '-n',
                    '--apache'
                ],
                creates=self.render_string(
                    '/etc/letsencrypt/live/mail.{{ansible_domain}}/fullchain.pem'
                )),
            name='obtain mail.* letsencrypt certificate')
        # A converted template task!
        self.add(
            builtin.copy(
                dest='/etc/dovecot/local.conf',
                src=self.render_file('templates/dovecot.conf')),
            name='configure dovecot',
            # Notify referring to the corresponding Role class!
            notify=RestartDovecot)
        # Referencing a variable collected from a fact!
        self.add(
            builtin.copy(dest='/etc/mailname', content=self.ansible_domain),
            name='configure /etc/mailname',
            notify=ReloadPostfix)
        ...
Conclusion

Transilience can load a (growing) subset of Ansible syntax, one role at a time. The role loader in Transilience now looks for YAML when it does not find a Python module, and runs it pipelined and fast! There is code to generate Python code from an Ansible role: you can take an Ansible role, convert it to Python, and then work on it to add more complex logic, or clean it up for adding it to a library of reusable roles!

Next: Ansible conditionals

30 April 2021

Paul Wise: FLOSS Activities April 2021

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration
  • Debian: restart service killed by OOM killer, revert mirror redirection
  • Debian wiki: unblock IP addresses, approve accounts

Communication

Sponsors

The flower/sptag work was sponsored by my employer. All other work was done on a volunteer basis.

25 April 2021

Antoine Beaupré: Lost article ideas

I wrote for LWN for about two years. During that time, I wrote (what seems to me an impressive) 34 articles, but I always had a pile of ideas in the back of my mind. Those are ideas, notes, and scribbles lying around. Some were just completely abandoned because they didn't seem a good fit for LWN. Concretely, I stored those in branches in a git repository, and used the branch name (and, naively, the last commit log) as indicators of the topic. This was the state of affairs when I left:
remotes/private/attic/novena                    822ca2bb add letter i sent to novena, never published
remotes/private/attic/secureboot                de09d82b quick review, add note and graph
remotes/private/attic/wireguard                 5c5340d1 wireguard review, tutorial and comparison with alternatives
remotes/private/backlog/dat                     914c5edf Merge branch 'master' into backlog/dat
remotes/private/backlog/packet                  9b2c6d1a ham radio packet innovations and primer
remotes/private/backlog/performance-tweaks      dcf02676 config notes for http2
remotes/private/backlog/serverless              9fce6484 postponed until kubecon europe
remotes/private/fin/cost-of-hosting             00d8e499 cost-of-hosting article online
remotes/private/fin/kubecon                     f4fd7df2 remove published or spun off articles
remotes/private/fin/kubecon-overview            21fae984 publish kubecon overview article
remotes/private/fin/kubecon2018                 1edc5ec8 add series
remotes/private/fin/netconf                     3f4b7ece publish the netconf articles
remotes/private/fin/netdev                      6ee66559 publish articles from netdev 2.2
remotes/private/fin/pgp-offline                 f841deed pgp offline branch ready for publication
remotes/private/fin/primes                      c7e5b912 publish the ROCA paper
remotes/private/fin/runtimes                    4bee1d70 prepare publication of runtimes articles
remotes/private/fin/token-benchmarks            5a363992 regenerate timestamp automatically
remotes/private/ideas/astropy                   95d53152 astropy or python in astronomy
remotes/private/ideas/avaneya                   20a6d149 crowdfunded blade-runner-themed GPLv3 simcity-like simulator
remotes/private/ideas/backups-benchmarks        fe2f1f13 review of backup software through performance and features
remotes/private/ideas/cumin                     7bed3945 review of the cumin automation tool from WM foundation
remotes/private/ideas/future-of-distros         d086ca0d modern packaging problems and complex apps
remotes/private/ideas/on-dying                  a92ad23f another dying thing
remotes/private/ideas/openpgp-discovery         8f2782f0 openpgp discovery mechanisms (WKD, etc), thanks to jonas meurer
remotes/private/ideas/password-bench            451602c0 bruteforce estimates for various password patterns compared with RSA key sizes
remotes/private/ideas/prometheus-openmetrics    2568dbd6 openmetrics standardizing prom metrics enpoints
remotes/private/ideas/telling-time              f3c24a53 another way of telling time
remotes/private/ideas/wallabako                 4f44c5da talk about wallabako, read-it-later + kobo hacking
remotes/private/stalled/bench-bench-bench       8cef0504 benchmarking http benchmarking tools
remotes/private/stalled/debian-survey-democracy 909bdc98 free software surveys and debian democracy, volunteer vs paid work
Wow, what a mess! Let's see if I can make sense of this:

Attic

Those are articles that I thought about, then finally rejected, either because they didn't seem worth it, or my editors rejected them, or I just moved on:
  • novena: the project is ooold now, didn't seem to fit a LWN article. it was basically "how can i build my novena now" and "you guys rock!" it seems like the MNT Reform is the brainchild of the Novena now, and I dare say it's even cooler!
  • secureboot: my LWN editors were critical of my approach, and probably rightly so - it's a really complex subject and I was probably out of my depth... it's also out of date now, we did manage secureboot in Debian
  • wireguard: LWN ended up writing extensive coverage, and I was biased against Donenfeld because of conflicts in a previous project

Backlog

Those were articles I was planning to write about next.
  • dat: I already had written Sharing and archiving data sets with Dat, but it seems I had more to say... mostly performance issues, beaker, no streaming, limited adoption... to be investigated, I guess?
  • packet: a primer on data communications over ham radio, and the cool new tech that has emerged in the free software world. those are mainly notes about Pat, Direwolf, APRS and so on... just never got around to making sense of it or really using the tech...
  • performance-tweaks: "optimizing websites at the age of http2", the unwritten story of the optimization of this website with HTTP/2 and friends
  • serverless: god. one of the leftover topics at Kubecon, my notes on this were thin, and the actual subject, possibly even thinner... the only lie worse than the cloud is that there's no server at all! concretely, that's a pile of notes about Kubecon which I wanted to sort through. Probably belongs in the attic now.

Fin

Those are finished articles; they were published on my website and LWN, but the branches were kept because previous drafts had private notes that should not be published.

Ideas

A lot of those branches were actually just an empty commit, with the commit log being the "pitch", more or less. I'd send that list to my editors, sometimes with a few more links (basically the above), and they would nudge me one way or the other. Sometimes they would actively discourage me from writing about something, and I would do it anyways, send them a draft, and they would patiently make me rewrite it until it was a decent article. This was especially hard with the terminal emulator series, which took forever to write and even got my editors upset when they realized I had never installed Fedora (I ended up installing it, and I was proven wrong!)

Stalled

Oh, and then there's those: those are either "ideas" or "backlog" that got so far behind that I just moved them out of the way because I was tired of seeing them in my list.
  • stalled/bench-bench-bench: benchmarking http benchmarking tools, a horrible mess of links, copy-paste from terminals, and ideas about benchmarking... some of this trickled out into this benchmarking guide at Tor, but not much more than the list of tools
  • stalled/debian-survey-democracy: "free software surveys and Debian democracy, volunteer vs paid work"... A long-standing concern of mine is that all Debian work is supposed to be volunteer, and paying explicitly for work inside Debian has traditionally been frowned upon, even leading to serious drama and dissent (remember Dunc-Tank?). Back when I was writing for LWN, I was also doing paid work for Debian LTS. I also learned that a lot (most?) of Debian Developers were actually being paid by their job to work on Debian. So I was confused by this apparent contradiction, especially given how the LTS project has been mostly accepted, while Dunc-Tank was not... See also this talk at Debconf 16. I had hopes that this study would show the "hunch" people have offered (that most DDs are paid to work on Debian) but it seems to show the reverse (only 36% of DDs, and 18% of all respondents, paid). So I am still confused and worried about the sustainability of Debian.

What do you think?

So that's all I got. As people might have noticed here, I have much less time to write these days, but if there's any subject in there I should pick, what is the one that you would find most interesting? Oh! And I should mention that you can write to LWN! If you think people should know more about some Linux thing, you can get paid to write for it! Pitch it to the editors, they won't bite. The worst that can happen is that they say "yes" and there goes two years of your life learning to write. Because no, you don't know how to write, no one does. You need an editor to write. That's why this article looks like crap and has a smiley. :)

15 March 2021

Matthew Garrett: Exploring my doorbell

I've talked about my doorbell before, but started looking at it again this week because sometimes it simply doesn't send notifications to my Home Assistant setup - the push notifications appear on my phone, but the doorbell simply doesn't trigger the HTTP callback it's meant to[1]. This is obviously suboptimal, but it's also tricky to debug a device when you have no access to it.

Normally I'd just head straight in with a screwdriver, but the doorbell is shared with the other units in this building and it seemed a little anti-social to interfere with a shared resource. So I bought some broken units from ebay and pulled one of them apart. There are several boards inside, but one of them had a conveniently empty connector at the top with "TX", "RX" and "GND" labelled. Sticking a USB-serial converter on this gave me output from U-Boot, and then kernel output. Confirmation that my doorbell runs Linux, but unfortunately it didn't give me a shell prompt. My next approach would often be to just dump the flash and look for vulnerabilities that way, but this device uses TSOP-48 packaged NAND flash rather than the more convenient SPI NOR flash that I already have adapters to access. Dumping this sort of NAND isn't terribly hard, but the easiest way to do it involves desoldering it from the board and plugging it into something like a Flashcat USB adapter, and my soldering's not good enough to put it back on the board afterwards. So I wanted another approach.

U-Boot gave a short countdown to hit a key before continuing with boot, and for once hitting a key actually did something. Unfortunately it then prompted for a password, and giving the wrong one resulted in boot continuing[2]. In the past I've had good luck forcing U-Boot to drop to a prompt by simply connecting one of the data lines on SPI flash to ground while it's trying to read the kernel - the failed read causes U-Boot to error out. It turns out the same works fine on raw NAND, so I just edited the kernel boot arguments to append "init=/bin/sh" and soon I had a shell.

From here on, things were made easier by virtue of the device using the YAFFS filesystem. Unlike many flash filesystems, it's read/write, so I could make changes that would persist through to the running system. There was a convenient copy of telnetd included, but it segfaulted on startup, which reduced its usefulness. Fortunately there was also a copy of Netcat[3]. If you make a fifo somewhere on the filesystem, you can cat the fifo to a shell, pipe the shell to a netcat listener, and then pipe netcat's output back to the fifo. The shell's output all gets passed to whatever connects to netcat, and whatever's sent to netcat gets passed through the fifo back to the shell. This is, obviously, horribly insecure, but it was enough to get a root shell over the network on the running device.
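Out of interest, the same loop can be expressed in a few lines of Python. This is just a sketch to illustrate the idea, not what ran on the doorbell (which only had netcat and a fifo):

import socket
import subprocess

# Listen on TCP port 4444 and wire a shell's stdin/stdout/stderr straight
# to whoever connects. Horribly insecure, exactly as noted above.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 4444))
srv.listen(1)
conn, _ = srv.accept()
# subprocess accepts any object with a fileno(), so the socket works directly.
subprocess.run(["/bin/sh", "-i"], stdin=conn, stdout=conn, stderr=conn)
conn.close()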

The doorbell runs various bits of software, one of which is Lighttpd to provide a local API and access to the device. Another component ("nxp-client") connects to the vendor's cloud infrastructure and passes cloud commands back to the local webserver. This is where I found something strange. Lighttpd was refusing to start because its modules wanted library symbols that simply weren't present on the device. My best guess is that a firmware update went wrong and left the device in a partially upgraded state - and without a working local webserver, there was no way to perform any further updates. This may explain why this doorbell was sitting on ebay.

Anyway. Now that I had shell, I could simply dump the flash by copying it directly off the /dev/mtdblock devices - since I had netcat, I could just pipe stuff through that back to my actual computer. Now that I had access to the filesystem, I could extract it locally and start digging into it more deeply. One incredibly useful tool for this is qemu-user. qemu is a general purpose hardware emulation platform, usually used to emulate entire systems. But in qemu-user mode, it instead only emulates the CPU. When a piece of code tries to make a system call to access the kernel, qemu-user translates that to the appropriate calling convention for the host kernel and makes that call instead. Combined with binfmt_misc, you can configure a Linux system to be able to run Linux binaries from other architectures. One of the best things about this is that, because they're still using the host convention for making syscalls, you can run the host strace on them and see what they're doing.

What I found was that nxp-client was calling back to the cloud platform, setting up an encrypted communication channel (using ChaCha20 and a bunch of key setup stuff I couldn't be bothered picking apart) and then waiting for commands from the cloud. It would then proxy those through to the local webserver. Since I couldn't run the local lighttpd, I just wrote a trivial Python app using http.server and waited to see what requests I got. The first was a GET to a CGI script called editcgi.cgi, along with a path name. I mocked up the GET request to respond with what was on the actual filesystem. The cloud then proceeded to POST to editcgi.cgi, with the same pathname and with new file contents. editcgi.cgi is apparently able to read and write to files on the filesystem.
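The trivial Python stand-in for the webserver can be as small as this sketch (paths and behaviour here are my reconstruction, not the exact script from the post):

#!/usr/bin/python3
# Log every request the cloud client proxies through, and answer GETs
# with file contents from the extracted filesystem dump.
import http.server

DUMP = "doorbell-rootfs"  # directory holding the extracted filesystem

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        print("GET", self.path)
        try:
            with open(DUMP + self.path.split("?")[0], "rb") as f:
                body = f.read()
            self.send_response(200)
            self.end_headers()
            self.wfile.write(body)
        except OSError:
            self.send_error(404)

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        print("POST", self.path, self.rfile.read(length)[:200])
        self.send_response(200)
        self.end_headers()

http.server.HTTPServer(("", 8080), Handler).serve_forever()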

But this is on the interface that's exposed to the cloud client, so this didn't appear immediately useful - and, indeed, trying to hit the same CGI binary over the local network gave me a 401 unauthorized error. There's a local API spec for these doorbells, but they all refer to scripts in the bha-api namespace, and this script was in the plain cgi-bin namespace. But then I noticed that the bha-api namespace didn't actually exist in the filesystem - instead, lighttpd's mod_alias was configured to rewrite requests to bha-api through to files in cgi-bin. And by using the documented API to get a session token, I could call editcgi.cgi to read and write arbitrary files on the doorbell. Which means I can drop an extra script in /etc/rc.d/rc3.d and get a shell on my doorbell.
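As a rough sketch of that last step (the endpoint and parameter names here are guesses for illustration only; consult the local API spec for the real ones):

import urllib.parse
import urllib.request

BASE = "http://doorbell.local"

# Hypothetical: fetch a session token via the documented local API
# (the real endpoint, auth scheme and response format may differ).
with urllib.request.urlopen(f"{BASE}/bha-api/getsession.cgi") as r:
    token = r.read().decode().strip()

# Read an arbitrary file through editcgi.cgi using that token; the
# parameter names are likewise assumptions.
params = urllib.parse.urlencode({"file": "/etc/passwd", "sessionid": token})
with urllib.request.urlopen(f"{BASE}/cgi-bin/editcgi.cgi?{params}") as r:
    print(r.read().decode())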

This all requires the ability to have local authentication credentials, so it's not a big security deal other than it allowing you to retain access to a monitoring device even after you've moved out and had your credentials revoked. I'm sure it's all fine.

[1] I can ping the doorbell from the Home Assistant machine, so it's not that the network is flaky
[2] The password appears to be hy9$gnhw0z6@ if anyone else ends up in this situation
[3] https://twitter.com/mjg59/status/654578208545751040


13 March 2021

Louis-Philippe Véronneau: Preventing an OpenPGP Smartcard from caching the PIN eternally

While I'm overall very happy about my migration to an OpenPGP hardware token, the process wasn't entirely seamless and I had to hack around some issues, for example the PIN caching behavior in GnuPG. As described in this bug, the cache-ttl parameter in GnuPG is not implemented and thus does nothing. This means that once you type in your PIN, it is cached for as long as the token is plugged in. Security-wise, this is not great. Instead of manually disconnecting the token frequently, I've come up with a script that restarts scdaemon if the token hasn't been used during the last X minutes. It seems to work well and I call it using this cron entry: */5 * * * * my_user /usr/local/bin/restart-scdaemon To get a log from scdaemon, you'll need a ~/.gnupg/scdaemon.conf file that looks like this:
debug-level basic
log-file /var/log/scdaemon.log
Hopefully it can be useful to others!
#!/usr/bin/python3
# Copyright 2021, Louis-Philippe Véronneau <pollo@debian.org>
#
# This script is free software: you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free Software
# Foundation, either version 3 of the License, or (at your option) any later
# version.
# 
# This script is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
# details.
# 
# You should have received a copy of the GNU General Public License along with
# this script. If not, see <http://www.gnu.org/licenses/>.
"""
This script restarts scdaemon after X minutes of inactivity to reset the PIN
cache. It is meant to be run by cron every X/2 minutes.
This is needed because there is currently no way to set a cache time for
smartcards. See https://dev.gnupg.org/T3362#137811 for more details.
"""
import os
import sys
import subprocess
from datetime import datetime, timedelta
from argparse import ArgumentParser
p = ArgumentParser(description=__doc__)
p.add_argument('-l', '--log', default="/var/log/scdaemon.log",
               help='Path to the scdaemon log file.')
p.add_argument('-t', '--timeout', type=int, default="10",
               help=("Desired cache time in minutes."))
args = p.parse_args()
def get_last_line(scdaemon_log):
    """Returns the last line of the scdameon log file."""
    with open(scdaemon_log, 'rb') as f:
        f.seek(-2, os.SEEK_END)
        while f.read(1) != b'\n':
            f.seek(-2, os.SEEK_CUR)
        last_line = f.readline().decode()
    return last_line
def check_time(last_line, timeout):
    """Returns True if scdaemon hasn't been called since the defined timeout."""
    # We don't need to restart scdaemon if no gpg command has been run since
    # the last time it was restarted.
    should_restart = True
    if "OK closing connection" in last_line:
        should_restart = False
    else:
        last_time = datetime.strptime(last_line[:19], '%Y-%m-%d %H:%M:%S')
        now = datetime.now()
        delta = now - last_time
        if delta <= timedelta(minutes = timeout):
            should_restart = False
    return should_restart
def restart_scdaemon(scdaemon_log):
    """Restart scdaemon and verify the restart process was successful."""
    subprocess.run(['gpgconf', '--reload', 'scdaemon'], check=True)
    last_line = get_last_line(scdaemon_log)
    if "OK closing connection" not in last_line:
        sys.exit("Restarting scdameon has failed.")
def main():
    """Main function."""
    last_line = get_last_line(args.log)
    should_restart = check_time(last_line, args.timeout)
    if should_restart:
        restart_scdaemon(args.log)
if __name__ == "__main__":
    main()

18 February 2021

Jonathan McDowell: Hacking and Bricking the EE Osprey 2 Mini

I've mentioned in the past my twisted EE network setup from when I moved in to my current house. The 4GEE WiFi Mini (also known as the EE Osprey 2 Mini or the EE40VB, and actually a rebadged Alcatel Y853VB) has been sitting unused since then, so I figured I'd see about trying to get a shell on it.

TL;DR: Of course it's running Linux, and there's a couple of test points internally which bring out the serial console, but after finding those and logging in I discovered it's running ADB on port 5555, quite happily available without authentication both via wifi and the USB port. So if you have physical or local network access, instant root shell. Well done, folks. And then I bricked it before I could do anything more interesting.

There's a lack of information about this device out there - most of the links I can find are around removing the SIM lock - so I thought I'd document the pieces I found just in case anyone else is trying to figure it out. It's based around a Qualcomm MDM9607 SoC, paired with 64M RAM and 256M NAND flash. Wifi is via an RTL8192ES. Kernel is 3.18.20. Busybox is v1.23.1. It's running dnsmasq, but I didn't grab the version. Of course there's no source or offer of source provided.

Taking it apart is fairly easy. There's a single screw to remove, just beside the SIM slot. The coloured rim can then be carefully pried away from the back, revealing the battery. There are then 4 screws in the corners which need to be removed in order to be able to lift out the actual PCB and gain access to the serial console test points.

EE40VB PCB serial console test points

My mistake was going poking around trying to figure out where the updates are downloaded from - I know I'm running a slightly older release than what's current, and the device can do an automatic download + update. Top tip: don't run Jrdrecovery. It'll error on finding /cache/update.zip and wipe the main partition anyway. That'll leave you in a boot loop where the device boots the recovery partition, which tries to install /cache/update.zip, which of course still doesn't exist.

So. Where next? First, I need to get the device into a state where I can actually do something other than watch it boot into recovery, fail to flash and reboot. Best guess at present is to try and get it to enter the Qualcomm EDL (Emergency Download) mode. That might be possible with a custom USB cable that grounds D+ on boot. Alternatively, I need to probe some of the other test points on the PCB and see if grounding any of those helps enter EDL mode. I then need a suitable firehose OEM-signed programmer image. And then I need to actually get hold of a proper EE40VB firmware image, either via one of the OTA update files or possibly via an Alcatel ADSU image (though no idea how to get hold of one, other than by posting to a random GSM device forum and hoping for the kindness of strangers). More updates if/when I make progress.
Qualcomm bootloader log
Format: Log Type - Time(microsec) - Message - Optional Info
Log Type: B - Since Boot(Power On Reset),  D - Delta,  S - Statistic
S - QC_IMAGE_VERSION_STRING=BOOT.BF.3.1.2-00053
S - IMAGE_VARIANT_STRING=LAATANAZA
S - OEM_IMAGE_VERSION_STRING=linux3
S - Boot Config, 0x000002e1
B -    105194 - SBL1, Start
D -     61885 - QSEE Image Loaded, Delta - (451964 Bytes)
D -     30286 - RPM Image Loaded, Delta - (151152 Bytes)
B -    459330 - Roger:boot_jrd_oem_main
B -    461526 - Welcome to key_check_poweron!!!
B -    466436 - REG0x00, rc=47
B -    469120 - REG0x01, rc=1f
B -    472018 - REG0x02, rc=1c
B -    474885 - REG0x03, rc=47
B -    477782 - REG0x04, rc=b2
B -    480558 - REG0x05, rc=
B -    483272 - REG0x06, rc=9e
B -    486139 - REG0x07, rc=
B -    488854 - REG0x08, rc=a4
B -    491721 - REG0x09, rc=80
B -    494130 - bq24295_probe: vflt/vsys/vprechg=0mV/0mV/0mV, tprechg/tfastchg=0Min/0Min, [0C, 0C]
B -    511546 - come to calculate vol and temperature!!
B -    511637 - ##############battery_core_convert_vntc: NTC_voltage=1785690
B -    517280 - battery_core_convert_vntc: <-44C, 1785690uV>, present=0
B -    529358 - bq24295_set_current_limit: setting=0mA, mode=-1, input/fastchg/prechg/termchg=-1mA/0mA/0mA/0mA
B -    534360 - bq24295_set_charge_current, rc=0,reg_val=0,i=0
B -    539636 - bq24295_enable_charge: setting=0, chg_enable=-1, otg_enable=0
B -    546072 - bq24295_enable_charging: enable_charging=0
B -    552172 - bq24295_set_current_limit: setting=0mA, mode=-1, input/fastchg/prechg/termchg=-1mA/0mA/0mA/0mA
B -    561566 - bq24295_set_charge_current, rc=0,reg_val=0,i=0
B -    567056 - bq24295_enable_charge: setting=0, chg_enable=0, otg_enable=0
B -    579286 - come to calculate vol and temperature!!
B -    579378 - ##############battery_core_convert_vntc: NTC_voltage=1785777
B -    585539 - battery_core_convert_vntc: <-44C, 1785777uV>, present=0
B -    597617 - charge_main: battery is plugout!!
B -    597678 - Welcome to pca955x_probe!!!
B -    601063 - pca955x_probe: PCA955X probed successfully!
D -     27511 - APPSBL Image Loaded, Delta - (179348 Bytes)
B -    633271 - QSEE Execution, Start
D -       213 - QSEE Execution, Delta
B -    638944 - >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>Start writting JRD RECOVERY BOOT
B -    650107 - >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>Start writting  RECOVERY BOOT
B -    653218 - >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>read_buf[0] == 0
B -    659044 - SBL1, End
D -    556137 - SBL1, Delta
S - Throughput, 2000 KB/s  (782884 Bytes,  278155 us)
S - DDR Frequency, 240 MHz
littlekernel aboot log
Android Bootloader - UART_DM Initialized!!!
[0] welcome to lk
[0] SCM call: 0x2000601 failed with :fffffffc
[0] Failed to initialize SCM
[10] platform_init()
[10] target_init()
[10] smem ptable found: ver: 4 len: 17
[10] ERROR: No devinfo partition found
[10] Neither 'config' nor 'frp' partition found
[30] voltage of NTC  is 1789872!
[30] voltage of BAT  is 3179553!
[30] usb present is 1!
[30] Loading (boot) image (4171776): start
[530] Loading (boot) image (4171776): done
[540] DTB Total entry: 25, DTB version: 3
[540] Using DTB entry 0x00000129/00010000/0x00000008/0 for device 0x00000129/00010000/0x00010008/0
[560] JRD_CHG_OFF_FEATURE!
[560] come to jrd_target_pause_for_battery_charge!
[570] power_on_status.hard_reset = 0x0
[570] power_on_status.smpl = 0x0
[570] power_on_status.rtc = 0x0
[580] power_on_status.dc_chg = 0x0
[580] power_on_status.usb_chg = 0x0
[580] power_on_status.pon1 = 0x1
[590] power_on_status.cblpwr = 0x0
[590] power_on_status.kpdpwr = 0x0
[590] power_on_status.bugflag = 0x0
[590] cmdline: noinitrd  rw console=ttyHSL0,115200,n8 androidboot.hardware=qcom ehci-hcd.park=3 msm_rtb.filter=0x37 lpm_levels.sleep_disabled=1  earlycon=msm_hsl_uart,0x78b3000  androidboot.serialno=7e6ba58c androidboot.baseband=msm rootfstype=ubifs rootflags=b
[620] Updating device tree: start
[720] Updating device tree: done
[720] booting linux @ 0x80008000, ramdisk @ 0x80008000 (0), tags/device tree @ 0x81e00000
Linux kernel console boot log
[    0.000000] Booting Linux on physical CPU 0x0
[    0.000000] Linux version 3.18.20 (linux3@linux3) (gcc version 4.9.2 (GCC) ) #1 PREEMPT Thu Aug 10 11:57:07 CST 2017
[    0.000000] CPU: ARMv7 Processor [410fc075] revision 5 (ARMv7), cr=10c53c7d
[    0.000000] CPU: PIPT / VIPT nonaliasing data cache, VIPT aliasing instruction cache
[    0.000000] Machine model: Qualcomm Technologies, Inc. MDM 9607 MTP
[    0.000000] Early serial console at I/O port 0x0 (options '')
[    0.000000] bootconsole [uart0] enabled
[    0.000000] Reserved memory: reserved region for node 'modem_adsp_region@0': base 0x82a00000, size 56 MiB
[    0.000000] Reserved memory: reserved region for node 'external_image_region@0': base 0x87c00000, size 4 MiB
[    0.000000] Removed memory: created DMA memory pool at 0x82a00000, size 56 MiB
[    0.000000] Reserved memory: initialized node modem_adsp_region@0, compatible id removed-dma-pool
[    0.000000] Removed memory: created DMA memory pool at 0x87c00000, size 4 MiB
[    0.000000] Reserved memory: initialized node external_image_region@0, compatible id removed-dma-pool
[    0.000000] cma: Reserved 4 MiB at 0x87800000
[    0.000000] Memory policy: Data cache writeback
[    0.000000] CPU: All CPU(s) started in SVC mode.
[    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 17152
[    0.000000] Kernel command line: noinitrd  rw console=ttyHSL0,115200,n8 androidboot.hardware=qcom ehci-hcd.park=3 msm_rtb.filter=0x37 lpm_levels.sleep_disabled=1  earlycon=msm_hsl_uart,0x78b3000  androidboot.serialno=7e6ba58c androidboot.baseband=msm rootfstype=ubifs rootflags=bulk_read root=ubi0:rootfs ubi.mtd=16
[    0.000000] PID hash table entries: 512 (order: -1, 2048 bytes)
[    0.000000] Dentry cache hash table entries: 16384 (order: 4, 65536 bytes)
[    0.000000] Inode-cache hash table entries: 8192 (order: 3, 32768 bytes)
[    0.000000] Memory: 54792K/69632K available (5830K kernel code, 399K rwdata, 2228K rodata, 276K init, 830K bss, 14840K reserved)
[    0.000000] Virtual kernel memory layout:
[    0.000000]     vector  : 0xffff0000 - 0xffff1000   (   4 kB)
[    0.000000]     fixmap  : 0xffc00000 - 0xfff00000   (3072 kB)
[    0.000000]     vmalloc : 0xc8800000 - 0xff000000   ( 872 MB)
[    0.000000]     lowmem  : 0xc0000000 - 0xc8000000   ( 128 MB)
[    0.000000]     modules : 0xbf000000 - 0xc0000000   (  16 MB)
[    0.000000]       .text : 0xc0008000 - 0xc07e6c38   (8060 kB)
[    0.000000]       .init : 0xc07e7000 - 0xc082c000   ( 276 kB)
[    0.000000]       .data : 0xc082c000 - 0xc088fdc0   ( 400 kB)
[    0.000000]        .bss : 0xc088fe84 - 0xc095f798   ( 831 kB)
[    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
[    0.000000] Preemptible hierarchical RCU implementation.
[    0.000000] NR_IRQS:16 nr_irqs:16 16
[    0.000000] GIC CPU mask not found - kernel will fail to boot.
[    0.000000] GIC CPU mask not found - kernel will fail to boot.
[    0.000000] mpm_init_irq_domain(): Cannot find irq controller for qcom,gpio-parent
[    0.000000] MPM 1 irq mapping errored -517
[    0.000000] Architected mmio timer(s) running at 19.20MHz (virt).
[    0.000011] sched_clock: 56 bits at 19MHz, resolution 52ns, wraps every 3579139424256ns
[    0.007975] Switching to timer-based delay loop, resolution 52ns
[    0.013969] Switched to clocksource arch_mem_counter
[    0.019687] Console: colour dummy device 80x30
[    0.023344] Calibrating delay loop (skipped), value calculated using timer frequency.. 38.40 BogoMIPS (lpj=192000)
[    0.033666] pid_max: default: 32768 minimum: 301
[    0.038411] Mount-cache hash table entries: 1024 (order: 0, 4096 bytes)
[    0.044902] Mountpoint-cache hash table entries: 1024 (order: 0, 4096 bytes)
[    0.052445] CPU: Testing write buffer coherency: ok
[    0.057057] Setting up static identity map for 0x8058aac8 - 0x8058ab20
[    0.064242]
[    0.064242] **********************************************************
[    0.071251] **   NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE   **
[    0.077817] **                                                      **
[    0.084302] ** trace_printk() being used. Allocating extra memory.  **
[    0.090781] **                                                      **
[    0.097320] ** This means that this is a DEBUG kernel and it is     **
[    0.103802] ** unsafe for produciton use.                           **
[    0.110339] **                                                      **
[    0.116850] ** If you see this message and you are not debugging    **
[    0.123333] ** the kernel, report this immediately to your vendor!  **
[    0.129870] **                                                      **
[    0.136380] **   NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE   **
[    0.142865] **********************************************************
[    0.150225] MSM Memory Dump base table set up
[    0.153739] MSM Memory Dump apps data table set up
[    0.168125] VFP support v0.3: implementor 41 architecture 2 part 30 variant 7 rev 5
[    0.176332] pinctrl core: initialized pinctrl subsystem
[    0.180930] regulator-dummy: no parameters
[    0.215338] NET: Registered protocol family 16
[    0.220475] DMA: preallocated 256 KiB pool for atomic coherent allocations
[    0.284034] cpuidle: using governor ladder
[    0.314026] cpuidle: using governor menu
[    0.344024] cpuidle: using governor qcom
[    0.355452] msm_watchdog b017000.qcom,wdt: wdog absent resource not present
[    0.361656] msm_watchdog b017000.qcom,wdt: MSM Watchdog Initialized
[    0.371373] irq: no irq domain found for /soc/pinctrl@1000000 !
[    0.381268] spmi_pmic_arb 200f000.qcom,spmi: PMIC Arb Version-2 0x20010000
[    0.389733] platform 4080000.qcom,mss: assigned reserved memory node modem_adsp_region@0
[    0.397409] mem_acc_corner: 0 <--> 0 mV
[    0.401937] hw-breakpoint: found 5 (+1 reserved) breakpoint and 4 watchpoint registers.
[    0.408966] hw-breakpoint: maximum watchpoint size is 8 bytes.
[    0.416287] __of_mpm_init(): MPM driver mapping exists
[    0.420940] msm_rpm_glink_dt_parse: qcom,rpm-glink compatible not matches
[    0.427235] msm_rpm_dev_probe: APSS-RPM communication over SMD
[    0.432977] smd_open() before smd_init()
[    0.437544] msm_mpm_dev_probe(): Cannot get clk resource for XO: -517
[    0.445730] smd_channel_probe_now: allocation table not initialized
[    0.453100] mdm9607_s1: 1050 <--> 1350 mV at 1225 mV normal idle
[    0.458566] spm_regulator_probe: name=mdm9607_s1, range=LV, voltage=1225000 uV, mode=AUTO, step rate=4800 uV/us
[    0.468817] cpr_efuse_init: apc_corner: efuse_addr = 0x000a4000 (len=0x1000)
[    0.475353] cpr_read_fuse_revision: apc_corner: fuse revision = 2
[    0.481345] cpr_parse_speed_bin_fuse: apc_corner: [row: 37]: 0x79e8bd327e6ba58c, speed_bits = 4
[    0.490124] cpr_pvs_init: apc_corner: pvs voltage: [1050000 1100000 1275000] uV
[    0.497342] cpr_pvs_init: apc_corner: ceiling voltage: [1050000 1225000 1350000] uV
[    0.504979] cpr_pvs_init: apc_corner: floor voltage: [1050000 1050000 1150000] uV
[    0.513125] i2c-msm-v2 78b8000.i2c: probing driver i2c-msm-v2
[    0.518335] i2c-msm-v2 78b8000.i2c: error on clk_get(core_clk):-517
[    0.524478] i2c-msm-v2 78b8000.i2c: error probe() failed with err:-517
[    0.531111] i2c-msm-v2 78b7000.i2c: probing driver i2c-msm-v2
[    0.536788] i2c-msm-v2 78b7000.i2c: error on clk_get(core_clk):-517
[    0.542886] i2c-msm-v2 78b7000.i2c: error probe() failed with err:-517
[    0.549618] i2c-msm-v2 78b9000.i2c: probing driver i2c-msm-v2
[    0.555202] i2c-msm-v2 78b9000.i2c: error on clk_get(core_clk):-517
[    0.561374] i2c-msm-v2 78b9000.i2c: error probe() failed with err:-517
[    0.570613] msm-thermal soc:qcom,msm-thermal: msm_thermal:Failed reading node=/soc/qcom,msm-thermal, key=qcom,core-limit-temp. err=-22. KTM continues
[    0.583049] msm-thermal soc:qcom,msm-thermal: probe_therm_reset:Failed reading node=/soc/qcom,msm-thermal, key=qcom,therm-reset-temp err=-22. KTM continues
[    0.596926] msm_thermal:msm_thermal_dev_probe Failed reading node=/soc/qcom,msm-thermal, key=qcom,online-hotplug-core. err:-517
[    0.609370] sps:sps is ready.
[    0.613137] msm_rpm_glink_dt_parse: qcom,rpm-glink compatible not matches
[    0.619020] msm_rpm_dev_probe: APSS-RPM communication over SMD
[    0.625773] mdm9607_s2: 750 <--> 1275 mV at 750 mV normal idle
[    0.631584] mdm9607_s3_level: 0 <--> 0 mV at 0 mV normal idle
[    0.637085] mdm9607_s3_level_ao: 0 <--> 0 mV at 0 mV normal idle
[    0.643092] mdm9607_s3_floor_level: 0 <--> 0 mV at 0 mV normal idle
[    0.649512] mdm9607_s3_level_so: 0 <--> 0 mV at 0 mV normal idle
[    0.655750] mdm9607_s4: 1800 <--> 1950 mV at 1800 mV normal idle
[    0.661791] mdm9607_l1: 1250 mV normal idle
[    0.666090] mdm9607_l2: 1800 mV normal idle
[    0.670276] mdm9607_l3: 1800 mV normal idle
[    0.674541] mdm9607_l4: 3075 mV normal idle
[    0.678743] mdm9607_l5: 1700 <--> 3050 mV at 1700 mV normal idle
[    0.684904] mdm9607_l6: 1700 <--> 3050 mV at 1700 mV normal idle
[    0.690892] mdm9607_l7: 1700 <--> 1900 mV at 1700 mV normal idle
[    0.697036] mdm9607_l8: 1800 mV normal idle
[    0.701238] mdm9607_l9: 1200 <--> 1250 mV at 1200 mV normal idle
[    0.707367] mdm9607_l10: 1050 mV normal idle
[    0.711662] mdm9607_l11: 1800 mV normal idle
[    0.716089] mdm9607_l12_level: 0 <--> 0 mV at 0 mV normal idle
[    0.721717] mdm9607_l12_level_ao: 0 <--> 0 mV at 0 mV normal idle
[    0.727946] mdm9607_l12_level_so: 0 <--> 0 mV at 0 mV normal idle
[    0.734099] mdm9607_l12_floor_lebel: 0 <--> 0 mV at 0 mV normal idle
[    0.740706] mdm9607_l13: 1800 <--> 2850 mV at 2850 mV normal idle
[    0.746883] mdm9607_l14: 2650 <--> 3000 mV at 2650 mV normal idle
[    0.752515] msm_mpm_dev_probe(): Cannot get clk resource for XO: -517
[    0.759036] cpr_efuse_init: apc_corner: efuse_addr = 0x000a4000 (len=0x1000)
[    0.765807] cpr_read_fuse_revision: apc_corner: fuse revision = 2
[    0.771809] cpr_parse_speed_bin_fuse: apc_corner: [row: 37]: 0x79e8bd327e6ba58c, speed_bits = 4
[    0.780586] cpr_pvs_init: apc_corner: pvs voltage: [1050000 1100000 1275000] uV
[    0.787808] cpr_pvs_init: apc_corner: ceiling voltage: [1050000 1225000 1350000] uV
[    0.795443] cpr_pvs_init: apc_corner: floor voltage: [1050000 1050000 1150000] uV
[    0.803094] cpr_init_cpr_parameters: apc_corner: up threshold = 2, down threshold = 3
[    0.810752] cpr_init_cpr_parameters: apc_corner: CPR is enabled by default.
[    0.817687] cpr_init_cpr_efuse: apc_corner: [row:65] = 0x15000277277383
[    0.824272] cpr_init_cpr_efuse: apc_corner: CPR disable fuse = 0
[    0.830225] cpr_init_cpr_efuse: apc_corner: Corner[1]: ro_sel = 0, target quot = 631
[    0.837976] cpr_init_cpr_efuse: apc_corner: Corner[2]: ro_sel = 0, target quot = 631
[    0.845703] cpr_init_cpr_efuse: apc_corner: Corner[3]: ro_sel = 0, target quot = 899
[    0.853592] cpr_config: apc_corner: Timer count: 0x17700 (for 5000 us)
[    0.860426] apc_corner: 0 <--> 0 mV
[    0.864044] i2c-msm-v2 78b8000.i2c: probing driver i2c-msm-v2
[    0.869261] i2c-msm-v2 78b8000.i2c: error on clk_get(core_clk):-517
[    0.875492] i2c-msm-v2 78b8000.i2c: error probe() failed with err:-517
[    0.882225] i2c-msm-v2 78b7000.i2c: probing driver i2c-msm-v2
[    0.887775] i2c-msm-v2 78b7000.i2c: error on clk_get(core_clk):-517
[    0.893941] i2c-msm-v2 78b7000.i2c: error probe() failed with err:-517
[    0.900719] i2c-msm-v2 78b9000.i2c: probing driver i2c-msm-v2
[    0.906256] i2c-msm-v2 78b9000.i2c: error on clk_get(core_clk):-517
[    0.912430] i2c-msm-v2 78b9000.i2c: error probe() failed with err:-517
[    0.919472] msm-thermal soc:qcom,msm-thermal: msm_thermal:Failed reading node=/soc/qcom,msm-thermal, key=qcom,core-limit-temp. err=-22. KTM continues
[    0.932372] msm-thermal soc:qcom,msm-thermal: probe_therm_reset:Failed reading node=/soc/qcom,msm-thermal, key=qcom,therm-reset-temp err=-22. KTM continues
[    0.946361] msm_thermal:get_kernel_cluster_info CPU0 topology not initialized.
[    0.953824] cpu cpu0: dev_pm_opp_get_opp_count: device OPP not found (-19)
[    0.960300] msm_thermal:get_cpu_freq_plan_len Error reading CPU0 freq table len. error:-19
[    0.968533] msm_thermal:vdd_restriction_reg_init Defer vdd rstr freq init.
[    0.975846] cpu cpu0: dev_pm_opp_get_opp_count: device OPP not found (-19)
[    0.982219] msm_thermal:get_cpu_freq_plan_len Error reading CPU0 freq table len. error:-19
[    0.991378] cpu cpu0: dev_pm_opp_get_opp_count: device OPP not found (-19)
[    0.997544] msm_thermal:get_cpu_freq_plan_len Error reading CPU0 freq table len. error:-19
[    1.013642] qcom,gcc-mdm9607 1800000.qcom,gcc: Registered GCC clocks
[    1.019451] clock-a7 b010008.qcom,clock-a7: Speed bin: 4 PVS Version: 0
[    1.025693] a7ssmux: set OPP pair(400000000 Hz: 1 uV) on cpu0
[    1.031314] a7ssmux: set OPP pair(1305600000 Hz: 7 uV) on cpu0
[    1.038805] i2c-msm-v2 78b8000.i2c: probing driver i2c-msm-v2
[    1.043587] AXI: msm_bus_scale_register_client(): msm_bus_scale_register_client: Bus driver not ready.
[    1.052935] i2c-msm-v2 78b8000.i2c: msm_bus_scale_register_client(mstr-id:86):0 (not a problem)
[    1.062006] irq: no irq domain found for /soc/wcd9xxx-irq !
[    1.069884] i2c-msm-v2 78b7000.i2c: probing driver i2c-msm-v2
[    1.074814] AXI: msm_bus_scale_register_client(): msm_bus_scale_register_client: Bus driver not ready.
[    1.083716] i2c-msm-v2 78b7000.i2c: msm_bus_scale_register_client(mstr-id:86):0 (not a problem)
[    1.093850] i2c-msm-v2 78b9000.i2c: probing driver i2c-msm-v2
[    1.098889] AXI: msm_bus_scale_register_client(): msm_bus_scale_register_client: Bus driver not ready.
[    1.107779] i2c-msm-v2 78b9000.i2c: msm_bus_scale_register_client(mstr-id:86):0 (not a problem)
[    1.167871] KPI: Bootloader start count = 24097
[    1.171364] KPI: Bootloader end count = 48481
[    1.175855] KPI: Bootloader display count = 3884474147
[    1.180825] KPI: Bootloader load kernel count = 16420
[    1.185905] KPI: Kernel MPM timestamp = 105728
[    1.190286] KPI: Kernel MPM Clock frequency = 32768
[    1.195209] socinfo_print: v0.10, id=297, ver=1.0, raw_id=72, raw_ver=0, hw_plat=8, hw_plat_ver=65536
[    1.195209]  accessory_chip=0, hw_plat_subtype=0, pmic_model=65539, pmic_die_revision=131074 foundry_id=0 serial_number=2120983948
[    1.216731] sdcard_ext_vreg: no parameters
[    1.220555] rome_vreg: no parameters
[    1.224133] emac_lan_vreg: no parameters
[    1.228177] usbcore: registered new interface driver usbfs
[    1.233156] usbcore: registered new interface driver hub
[    1.238578] usbcore: registered new device driver usb
[    1.244507] cpufreq: driver msm up and running
[    1.248425] ION heap system created
[    1.251895] msm_bus_fabric_init_driver
[    1.262563] qcom,qpnp-power-on qpnp-power-on-c7303800: PMIC@SID0 Power-on reason: Triggered from PON1 (secondary PMIC) and 'cold' boot
[    1.273747] qcom,qpnp-power-on qpnp-power-on-c7303800: PMIC@SID0: Power-off reason: Triggered from UVLO (Under Voltage Lock Out)
[    1.285430] input: qpnp_pon as /devices/virtual/input/input0
[    1.291246] PMIC@SID0: PM8019 v2.2 options: 3, 2, 2, 2
[    1.296706] Advanced Linux Sound Architecture Driver Initialized.
[    1.302493] Add group failed
[    1.305291] cfg80211: Calling CRDA to update world regulatory domain
[    1.311216] cfg80211: World regulatory domain updated:
[    1.317109] Switched to clocksource arch_mem_counter
[    1.334091] cfg80211:  DFS Master region: unset
[    1.337418] cfg80211:   (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp), (dfs_cac_time)
[    1.354087] cfg80211:   (2402000 KHz - 2472000 KHz @ 40000 KHz), (N/A, 2000 mBm), (N/A)
[    1.361055] cfg80211:   (2457000 KHz - 2482000 KHz @ 40000 KHz), (N/A, 2000 mBm), (N/A)
[    1.370545] NET: Registered protocol family 2
[    1.374082] cfg80211:   (2474000 KHz - 2494000 KHz @ 20000 KHz), (N/A, 2000 mBm), (N/A)
[    1.381851] cfg80211:   (5170000 KHz - 5250000 KHz @ 80000 KHz), (N/A, 2000 mBm), (N/A)
[    1.389876] cfg80211:   (5250000 KHz - 5330000 KHz @ 80000 KHz), (N/A, 2000 mBm), (N/A)
[    1.397857] cfg80211:   (5490000 KHz - 5710000 KHz @ 80000 KHz), (N/A, 2000 mBm), (N/A)
[    1.405841] cfg80211:   (5735000 KHz - 5835000 KHz @ 80000 KHz), (N/A, 2000 mBm), (N/A)
[    1.413795] cfg80211:   (57240000 KHz - 63720000 KHz @ 2160000 KHz), (N/A, 0 mBm), (N/A)
[    1.422355] TCP established hash table entries: 1024 (order: 0, 4096 bytes)
[    1.428921] TCP bind hash table entries: 1024 (order: 0, 4096 bytes)
[    1.435192] TCP: Hash tables configured (established 1024 bind 1024)
[    1.441528] TCP: reno registered
[    1.444738] UDP hash table entries: 256 (order: 0, 4096 bytes)
[    1.450521] UDP-Lite hash table entries: 256 (order: 0, 4096 bytes)
[    1.456950] NET: Registered protocol family 1
[    1.462779] futex hash table entries: 256 (order: -1, 3072 bytes)
[    1.474555] msgmni has been set to 115
[    1.478551] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
[    1.485041] io scheduler noop registered
[    1.488818] io scheduler deadline registered
[    1.493200] io scheduler cfq registered (default)
[    1.502142] msm_rpm_log_probe: OK
[    1.506717] msm_serial_hs module loaded
[    1.509803] msm_serial_hsl_probe: detected port #0 (ttyHSL0)
[    1.515324] AXI: get_pdata(): Error: Client name not found
[    1.520626] AXI: msm_bus_cl_get_pdata(): client has to provide missing entry for successful registration
[    1.530171] msm_serial_hsl_probe: Bus scaling is disabled
[    1.535696] 78b3000.serial: ttyHSL0 at MMIO 0x78b3000 (irq = 153, base_baud = 460800)
[    1.544155] msm_hsl_console_setup: console setup on port #0
[    1.548727] console [ttyHSL0] enabled
[    1.548727] console [ttyHSL0] enabled
[    1.556014] bootconsole [uart0] disabled
[    1.556014] bootconsole [uart0] disabled
[    1.564212] msm_serial_hsl_init: driver initialized
[    1.578450] brd: module loaded
[    1.582920] loop: module loaded
[    1.589183] sps: BAM device 0x07984000 is not registered yet.
[    1.594234] sps:BAM 0x07984000 is registered.
[    1.598072] msm_nand_bam_init: msm_nand_bam_init: BAM device registered: bam_handle 0xc69f6400
[    1.607103] sps:BAM 0x07984000 (va:0xc89a0000) enabled: ver:0x18, number of pipes:7
[    1.616588] msm_nand_parse_smem_ptable: Parsing partition table info from SMEM
[    1.622805] msm_nand_parse_smem_ptable: SMEM partition table found: ver: 4 len: 17
[    1.630391] msm_nand_version_check: nand_major:1, nand_minor:5, qpic_major:1, qpic_minor:5
[    1.638642] msm_nand_scan: NAND Id: 0x1590aa98 Buswidth: 8Bits Density: 256 MByte
[    1.646069] msm_nand_scan: pagesize: 2048 Erasesize: 131072 oobsize: 128 (in Bytes)
[    1.653676] msm_nand_scan: BCH ECC: 8 Bit
[    1.657710] msm_nand_scan: CFG0: 0x290408c0,           CFG1: 0x0804715c
[    1.657710]             RAWCFG0: 0x2b8400c0,        RAWCFG1: 0x0005055d
[    1.657710]           ECCBUFCFG: 0x00000203,      ECCBCHCFG: 0x42040d10
[    1.657710]           RAWECCCFG: 0x42000d11, BAD BLOCK BYTE: 0x000001c5
[    1.684101] Creating 17 MTD partitions on "7980000.nand":
[    1.689447] 0x000000000000-0x000000140000 : "sbl"
[    1.694867] 0x000000140000-0x000000280000 : "mibib"
[    1.699560] 0x000000280000-0x000000e80000 : "efs2"
[    1.704408] 0x000000e80000-0x000000f40000 : "tz"
[    1.708934] 0x000000f40000-0x000000fa0000 : "rpm"
[    1.713625] 0x000000fa0000-0x000001000000 : "aboot"
[    1.718582] 0x000001000000-0x0000017e0000 : "boot"
[    1.723281] 0x0000017e0000-0x000002820000 : "scrub"
[    1.728174] 0x000002820000-0x000005020000 : "modem"
[    1.732968] 0x000005020000-0x000005420000 : "rfbackup"
[    1.738156] 0x000005420000-0x000005820000 : "oem"
[    1.742770] 0x000005820000-0x000005f00000 : "recovery"
[    1.747972] 0x000005f00000-0x000009100000 : "cache"
[    1.752787] 0x000009100000-0x000009a40000 : "recoveryfs"
[    1.758389] 0x000009a40000-0x00000aa40000 : "cdrom"
[    1.762967] 0x00000aa40000-0x00000ba40000 : "jrdresource"
[    1.768407] 0x00000ba40000-0x000010000000 : "system"
[    1.773239] msm_nand_probe: NANDc phys addr 0x7980000, BAM phys addr 0x7984000, BAM IRQ 164
[    1.781074] msm_nand_probe: Allocated DMA buffer at virt_addr 0xc7840000, phys_addr 0x87840000
[    1.791872] PPP generic driver version 2.4.2
[    1.801126] cnss_sdio 87a00000.qcom,cnss-sdio: CNSS SDIO Driver registered
[    1.807554] msm_otg 78d9000.usb: msm_otg probe
[    1.813333] msm_otg 78d9000.usb: OTG regs = c88f8000
[    1.820702] gbridge_init: gbridge_init successs.
[    1.826344] msm_otg 78d9000.usb: phy_reset: success
[    1.830294] qcom,qpnp-rtc qpnp-rtc-c7307000: rtc core: registered qpnp_rtc as rtc0
[    1.838474] i2c /dev entries driver
[    1.842459] unable to find DT imem DLOAD mode node
[    1.846588] unable to find DT imem EDLOAD mode node
[    1.851332] unable to find DT imem dload-type node
[    1.856921] bq24295-charger 4-006b: bq24295 probe enter
[    1.861161] qcom,iterm-ma = 128
[    1.864476] bq24295_otg_vreg: no parameters
[    1.868502] charger_core_register: Charger Core Version 5.0.0(Built at 20151202-21:36)!
[    1.877007] i2c-msm-v2 78b8000.i2c: msm_bus_scale_register_client(mstr-id:86):0x3 (ok)
[    1.885559] bq24295-charger 4-006b: bq24295_set_bhot_mode 3
[    1.890150] bq24295-charger 4-006b: power_good is 1,vbus_stat is 2
[    1.896588] bq24295-charger 4-006b: bq24295_set_thermal_threshold 100
[    1.902952] bq24295-charger 4-006b: bq24295_set_sys_min 3700
[    1.908639] bq24295-charger 4-006b: bq24295_set_max_target_voltage 4150
[    1.915223] bq24295-charger 4-006b: bq24295_set_recharge_threshold 300
[    1.922119] bq24295-charger 4-006b: bq24295_set_terminal_current_limit iterm_disabled=0, iterm_ma=128
[    1.930917] bq24295-charger 4-006b: bq24295_set_precharge_current_limit bdi->prech_cur=128
[    1.940038] bq24295-charger 4-006b: bq24295_set_safty_timer 0
[    1.945088] bq24295-charger 4-006b: bq24295_set_input_voltage_limit 4520
[    1.972949] sdhci: Secure Digital Host Controller Interface driver
[    1.978151] sdhci: Copyright(c) Pierre Ossman
[    1.982441] sdhci-pltfm: SDHCI platform and OF driver helper
[    1.989092] sdhci_msm 7824900.sdhci: sdhci_msm_probe: ICE device is not enabled
[    1.995473] sdhci_msm 7824900.sdhci: No vreg data found for vdd
[    2.001530] sdhci_msm 7824900.sdhci: sdhci_msm_pm_qos_parse_irq: error -22 reading irq cpu
[    2.009809] sdhci_msm 7824900.sdhci: sdhci_msm_pm_qos_parse: PM QoS voting for IRQ will be disabled
[    2.018600] sdhci_msm 7824900.sdhci: sdhci_msm_pm_qos_parse: PM QoS voting for cpu group will be disabled
[    2.030541] sdhci_msm 7824900.sdhci: sdhci_msm_probe: sdiowakeup_irq = 353
[    2.036867] sdhci_msm 7824900.sdhci: No vmmc regulator found
[    2.042027] sdhci_msm 7824900.sdhci: No vqmmc regulator found
[    2.048266] mmc0: SDHCI controller on 7824900.sdhci [7824900.sdhci] using 32-bit ADMA in legacy mode
[    2.080401] Welcome to pca955x_probe!!
[    2.084362] leds-pca955x 3-0020: leds-pca955x: Using pca9555 16-bit LED driver at slave address 0x20
[    2.095400] sdhci_msm 7824900.sdhci: card claims to support voltages below defined range
[    2.103125] i2c-msm-v2 78b7000.i2c: msm_bus_scale_register_client(mstr-id:86):0x5 (ok)
[    2.114183] msm_otg 78d9000.usb: Avail curr from USB = 1500
[    2.120251] come to USB_SDP_CHARGER!
[    2.123215] Welcome to sn3199_probe!
[    2.126718] leds-sn3199 5-0064: leds-sn3199: Using sn3199 9-bit LED driver at slave address 0x64
[    2.136511] sn3199->led_en_gpio=21
[    2.139143] i2c-msm-v2 78b9000.i2c: msm_bus_scale_register_client(mstr-id:86):0x6 (ok)
[    2.150207] usbcore: registered new interface driver usbhid
[    2.154864] usbhid: USB HID core driver
[    2.159825] sps:BAM 0x078c4000 is registered.
[    2.163573] bimc-bwmon 408000.qcom,cpu-bwmon: BW HWmon governor registered.
[    2.171080] devfreq soc:qcom,cpubw: Couldn't update frequency transition information.
[    2.178513] coresight-fuse a601c.fuse: QPDI fuse not specified
[    2.184242] coresight-fuse a601c.fuse: Fuse initialized
[    2.192407] coresight-csr 6001000.csr: CSR initialized
[    2.197263] coresight-tmc 6026000.tmc: Byte Counter feature enabled
[    2.203204] sps:BAM 0x06084000 is registered.
[    2.207301] coresight-tmc 6026000.tmc: TMC initialized
[    2.212681] coresight-tmc 6025000.tmc: TMC initialized
[    2.220071] nidnt boot config: 0
[    2.224563] mmc0: new ultra high speed SDR50 SDIO card at address 0001
[    2.231120] coresight-tpiu 6020000.tpiu: NIDnT on SDCARD only mode
[    2.236440] coresight-tpiu 6020000.tpiu: TPIU initialized
[    2.242808] coresight-replicator 6024000.replicator: REPLICATOR initialized
[    2.249372] coresight-stm 6002000.stm: STM initialized
[    2.255034] coresight-hwevent 606c000.hwevent: Hardware Event driver initialized
[    2.262312] Netfilter messages via NETLINK v0.30.
[    2.266306] nf_conntrack version 0.5.0 (920 buckets, 3680 max)
[    2.272312] ctnetlink v0.93: registering with nfnetlink.
[    2.277565] ip_set: protocol 6
[    2.280568] ip_tables: (C) 2000-2006 Netfilter Core Team
[    2.285723] arp_tables: (C) 2002 David S. Miller
[    2.290146] TCP: cubic registered
[    2.293915] NET: Registered protocol family 10
[    2.298740] ip6_tables: (C) 2000-2006 Netfilter Core Team
[    2.303407] sit: IPv6 over IPv4 tunneling driver
[    2.308481] NET: Registered protocol family 17
[    2.312340] bridge: automatic filtering via arp/ip/ip6tables has been deprecated. Update your scripts to load br_netfilter if you need this.
[    2.325094] Bridge firewalling registered
[    2.328930] Ebtables v2.0 registered
[    2.333260] NET: Registered protocol family 27
[    2.341362] battery_core_register: Battery Core Version 5.0.0(Built at 20151202-21:36)!
[    2.348466] pmu_battery_probe: vbat_channel=21, tbat_channel=17
[    2.420236] ubi0: attaching mtd16
[    2.723941] ubi0: scanning is finished
[    2.732997] ubi0: attached mtd16 (name "system", size 69 MiB)
[    2.737783] ubi0: PEB size: 131072 bytes (128 KiB), LEB size: 126976 bytes
[    2.744601] ubi0: min./max. I/O unit sizes: 2048/2048, sub-page size 2048
[    2.751333] ubi0: VID header offset: 2048 (aligned 2048), data offset: 4096
[    2.758540] ubi0: good PEBs: 556, bad PEBs: 2, corrupted PEBs: 0
[    2.764305] ubi0: user volume: 3, internal volumes: 1, max. volumes count: 128
[    2.771476] ubi0: max/mean erase counter: 192/64, WL threshold: 4096, image sequence number: 35657280
[    2.780708] ubi0: available PEBs: 0, total reserved PEBs: 556, PEBs reserved for bad PEB handling: 38
[    2.789921] ubi0: background thread "ubi_bgt0d" started, PID 96
[    2.796395] android_bind cdev: 0xC6583E80, name: ci13xxx_msm
[    2.801508] file system registered
[    2.804974] mbim_init: initialize 1 instances
[    2.809228] mbim_init: Initialized 1 ports
[    2.815074] rndis_qc_init: initialize rndis QC instance
[    2.819713] jrd device_desc.bcdDevice: [0x0242]
[    2.823779] android_bind scheduled usb start work: name: ci13xxx_msm
[    2.830230] android_usb gadget: android_usb ready
[    2.834845] msm_hsusb msm_hsusb: [ci13xxx_start] hw_ep_max = 32
[    2.840741] msm_hsusb msm_hsusb: CI13XXX_CONTROLLER_RESET_EVENT received
[    2.847433] msm_hsusb msm_hsusb: CI13XXX_CONTROLLER_UDC_STARTED_EVENT received
[    2.855851] input: gpio-keys as /devices/soc:gpio_keys/input/input1
[    2.861452] qcom,qpnp-rtc qpnp-rtc-c7307000: setting system clock to 1970-01-01 06:36:41 UTC (23801)
[    2.870315] open file error /usb_conf/usb_config.ini
[    2.876412] jrd_usb_start_work open file erro /usb_conf/usb_config.ini, retry_count:0
[    2.884324] parse_legacy_cluster_params(): Ignoring cluster params
[    2.889468] ------------[ cut here ]------------
[    2.894186] WARNING: CPU: 0 PID: 1 at /home/linux3/jrd/yanping.an/ee40/0810/MDM9607.LE.1.0-00130/apps_proc/oe-core/build/tmp-glibc/work-shared/mdm9607/kernel-source/drivers/cpuidle/lpm-levels-of.c:739 parse_cluster+0xb50/0xcb4()
[    2.914366] Modules linked in:
[    2.917339] CPU: 0 PID: 1 Comm: swapper Not tainted 3.18.20 #1
[    2.923171] [<c00132ac>] (unwind_backtrace) from [<c0011460>] (show_stack+0x10/0x14)
[    2.931092] [<c0011460>] (show_stack) from [<c001c6ac>] (warn_slowpath_common+0x68/0x88)
[    2.939175] [<c001c6ac>] (warn_slowpath_common) from [<c001c75c>] (warn_slowpath_null+0x18/0x20)
[    2.947895] [<c001c75c>] (warn_slowpath_null) from [<c034e180>] (parse_cluster+0xb50/0xcb4)
[    2.956189] [<c034e180>] (parse_cluster) from [<c034b6b4>] (lpm_probe+0xc/0x1d4)
[    2.963527] [<c034b6b4>] (lpm_probe) from [<c024857c>] (platform_drv_probe+0x30/0x7c)
[    2.971380] [<c024857c>] (platform_drv_probe) from [<c0246d54>] (driver_probe_device+0xb8/0x1e8)
[    2.980118] [<c0246d54>] (driver_probe_device) from [<c0246f30>] (__driver_attach+0x68/0x8c)
[    2.988467] [<c0246f30>] (__driver_attach) from [<c02455d0>] (bus_for_each_dev+0x6c/0x90)
[    2.996626] [<c02455d0>] (bus_for_each_dev) from [<c02465a4>] (bus_add_driver+0xe0/0x1c8)
[    3.004786] [<c02465a4>] (bus_add_driver) from [<c02477bc>] (driver_register+0x9c/0xe0)
[    3.012739] [<c02477bc>] (driver_register) from [<c080c3d8>] (lpm_levels_module_init+0x14/0x38)
[    3.021459] [<c080c3d8>] (lpm_levels_module_init) from [<c0008980>] (do_one_initcall+0xf8/0x1a0)
[    3.030217] [<c0008980>] (do_one_initcall) from [<c07e7d4c>] (kernel_init_freeable+0xf0/0x1b0)
[    3.038818] [<c07e7d4c>] (kernel_init_freeable) from [<c0582d48>] (kernel_init+0x8/0xe4)
[    3.046888] [<c0582d48>] (kernel_init) from [<c000dda0>] (ret_from_fork+0x14/0x34)
[    3.054432] ---[ end trace e9ec50b1ec4c8f73 ]---
[    3.059012] ------------[ cut here ]------------
[    3.063604] WARNING: CPU: 0 PID: 1 at /home/linux3/jrd/yanping.an/ee40/0810/MDM9607.LE.1.0-00130/apps_proc/oe-core/build/tmp-glibc/work-shared/mdm9607/kernel-source/drivers/cpuidle/lpm-levels-of.c:739 parse_cluster+0xb50/0xcb4()
[    3.083858] Modules linked in:
[    3.086870] CPU: 0 PID: 1 Comm: swapper Tainted: G        W      3.18.20 #1
[    3.093814] [<c00132ac>] (unwind_backtrace) from [<c0011460>] (show_stack+0x10/0x14)
[    3.101575] [<c0011460>] (show_stack) from [<c001c6ac>] (warn_slowpath_common+0x68/0x88)
[    3.109641] [<c001c6ac>] (warn_slowpath_common) from [<c001c75c>] (warn_slowpath_null+0x18/0x20)
[    3.118412] [<c001c75c>] (warn_slowpath_null) from [<c034e180>] (parse_cluster+0xb50/0xcb4)
[    3.126745] [<c034e180>] (parse_cluster) from [<c034b6b4>] (lpm_probe+0xc/0x1d4)
[    3.134126] [<c034b6b4>] (lpm_probe) from [<c024857c>] (platform_drv_probe+0x30/0x7c)
[    3.141906] [<c024857c>] (platform_drv_probe) from [<c0246d54>] (driver_probe_device+0xb8/0x1e8)
[    3.150702] [<c0246d54>] (driver_probe_device) from [<c0246f30>] (__driver_attach+0x68/0x8c)
[    3.159120] [<c0246f30>] (__driver_attach) from [<c02455d0>] (bus_for_each_dev+0x6c/0x90)
[    3.167285] [<c02455d0>] (bus_for_each_dev) from [<c02465a4>] (bus_add_driver+0xe0/0x1c8)
[    3.175444] [<c02465a4>] (bus_add_driver) from [<c02477bc>] (driver_register+0x9c/0xe0)
[    3.183398] [<c02477bc>] (driver_register) from [<c080c3d8>] (lpm_levels_module_init+0x14/0x38)
[    3.192107] [<c080c3d8>] (lpm_levels_module_init) from [<c0008980>] (do_one_initcall+0xf8/0x1a0)
[    3.200877] [<c0008980>] (do_one_initcall) from [<c07e7d4c>] (kernel_init_freeable+0xf0/0x1b0)
[    3.209475] [<c07e7d4c>] (kernel_init_freeable) from [<c0582d48>] (kernel_init+0x8/0xe4)
[    3.217542] [<c0582d48>] (kernel_init) from [<c000dda0>] (ret_from_fork+0x14/0x34)
[    3.225090] ---[ end trace e9ec50b1ec4c8f74 ]---
[    3.229667] /soc/qcom,lpm-levels/qcom,pm-cluster@0: No CPU phandle, assuming single cluster
[    3.239954] qcom,cc-debug-mdm9607 1800000.qcom,debug: Registered Debug Mux successfully
[    3.247619] emac_lan_vreg: disabling
[    3.250507] mem_acc_corner: disabling
[    3.254196] clock_late_init: Removing enables held for handed-off clocks
[    3.262690] ALSA device list:
[    3.264732]   No soundcard
[    3.274083] UBIFS (ubi0:0): background thread "ubifs_bgt0_0" started, PID 102
[    3.305224] UBIFS (ubi0:0): recovery needed
[    3.466156] UBIFS (ubi0:0): recovery completed
[    3.469627] UBIFS (ubi0:0): UBIFS: mounted UBI device 0, volume 0, name "rootfs"
[    3.476987] UBIFS (ubi0:0): LEB size: 126976 bytes (124 KiB), min./max. I/O unit sizes: 2048 bytes/2048 bytes
[    3.486876] UBIFS (ubi0:0): FS size: 45838336 bytes (43 MiB, 361 LEBs), journal size 9023488 bytes (8 MiB, 72 LEBs)
[    3.497417] UBIFS (ubi0:0): reserved for root: 0 bytes (0 KiB)
[    3.503078] UBIFS (ubi0:0): media format: w4/r0 (latest is w4/r0), UUID 4DBB2F12-34EB-43B6-839B-3BA930765BAE, small LPT model
[    3.515582] VFS: Mounted root (ubifs filesystem) on device 0:12.
[    3.520940] Freeing unused kernel memory: 276K (c07e7000 - c082c000)
INIT: version 2.88 booting
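
On the EDL front, a cheap way to tell whether the D+-grounding trick (or one of the other test points) has actually worked is to watch for a Qualcomm HS-USB QDLoader device enumerating on the USB bus. A rough sketch using pyusb, assuming the stock Qualcomm EDL IDs of 05c6:9008 apply to this device too:

#!/usr/bin/env python3
# Poll the USB bus until something enumerates in Qualcomm EDL mode.
# Requires pyusb and a libusb backend; the 05c6:9008 IDs are the usual
# Qualcomm HS-USB QDLoader ones and are assumed, not confirmed, here.
import time
import usb.core

EDL_VID, EDL_PID = 0x05C6, 0x9008

print("Waiting for an EDL-mode device (Ctrl-C to give up)...")
while usb.core.find(idVendor=EDL_VID, idProduct=EDL_PID) is None:
    time.sleep(1)
print("EDL device found - a signed firehose programmer could be sent from here")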

7 February 2021

Chris Lamb: Favourite books of 2020

I won't reveal precisely how many books I read in 2020, but it was definitely an improvement on 74 in 2019, 53 in 2018 and 50 in 2017. But not only did I read more in a quantitative sense, the quality seemed higher as well. There were certainly fewer disappointments: given its cultural resonance, I was nonplussed by Nick Hornby's Fever Pitch, and whilst Ian Fleming's The Man with the Golden Gun was a little thin (again, given the obvious influence of the Bond franchise), the book lacked 'thinness' in a way that made it interesting to critique. The weakest novel I read this year was probably J. M. Berger's Optimal, but even this hybrid of Ready Player One and late-period Black Mirror wasn't that cringeworthy, all things considered. Alas, graphic novels continue to not quite be my thing, I'm afraid. I perhaps experienced more disappointments in the non-fiction section. Paul Bloom's Against Empathy was frustrating, particularly in that it expended unnecessary energy battling its misleading title and accepted terminology, and it could so easily have been a 20-minute video essay instead. (Elsewhere in the social sciences, David and Goliath will likely be the last Malcolm Gladwell book I voluntarily read.) After so many positive citations, I was also more than a little underwhelmed by Shoshana Zuboff's The Age of Surveillance Capitalism, and after Ryan Holiday's many engaging reboots of Stoic philosophy, his Conspiracy (on Peter Thiel and Hulk Hogan taking on Gawker) was slightly wide of the mark for me. Anyway, here follows a selection of my favourites from 2020, in no particular order:

Fiction Wolf Hall & Bring Up the Bodies & The Mirror and the Light Hilary Mantel During the early weeks of 2020, I re-read the first two parts of Hilary Mantel's Thomas Cromwell trilogy in time for the March release of The Mirror and the Light. I had actually spent the last few years eagerly following any news of the final instalment, feigning outrage whenever Mantel appeared to be spending time on other projects. Wolf Hall turned out to be an even better book than I remembered, and when The Mirror and the Light finally landed at midnight on 5th March, I began in earnest the next morning. Note that date carefully; this was early 2020, and the book swiftly became something of a heavy-handed allegory about the world at the time. That is to say, and without claiming that I am Monsieur Cromuel in any meaningful sense, it was an uneasy experience to be reading about a man whose confident grasp on his world, friends and life was slipping beyond his control and, at least in Cromwell's case, was heading inexorably towards its denouement. The final instalment in Mantel's trilogy is not perfect, and despite my love of her writing I would concur with the judges who decided against awarding her a third Booker Prize. For instance, there is something of the longueur that readers dislike in the second novel, although this might not be entirely Mantel's fault: after all, the rise of the "ugly" Anne of Cleves and laborious trade negotiations for an uninspiring mineral (this is no Herbertian 'spice') will never match the court intrigues of Anne Boleyn, Jane Seymour and that man for all seasons, Thomas More. Still, I am already looking forward to returning to the verbal sparring between King Henry and Cromwell when I read the entire trilogy once again, tentatively planned for 2022.

The Fault in Our Stars John Green I came across John Green's The Fault in Our Stars via a fantastic video by Lindsay Ellis discussing Roland Barthes' famous 1967 essay on authorial intent. However, I might have eventually come across The Fault in Our Stars regardless, not because of Green's status as an internet celebrity of sorts but because I'm a complete sucker for this kind of emotionally-manipulative bildungsroman, likely due to reading Philip Pullman's His Dark Materials a few too many times in my teens. Although its title is taken from Shakespeare's Julius Caesar, The Fault in Our Stars is actually more Romeo & Juliet. Hazel, a 16-year-old cancer patient, falls in love with Gus, an equally ill teen from her cancer support group. Hazel and Gus share the same acerbic (and distinctly unteenage) wit and a love of books, centred around Hazel's obsession with An Imperial Affliction, a novel by the meta-fictional author Peter Van Houten. Through a kind of American version of Jim'll Fix It, Gus and Hazel go and visit Van Houten in Amsterdam. I'm afraid it's even cheesier than I'm describing it. Yet just as there is a time and a place for Michelin stars and Haribo Starmix, there's surely a place for this kind of well-constructed but altogether maudlin literature. One test for emotionally manipulative works like this is how well they can mask their internal contradictions: while Green's story focuses on the universalities of love, fate and the shortness of life (as do almost all of his works, it seems), The Fault in Our Stars manages to hide, for example, that this is an exceedingly favourable treatment of terminal illness that is only possible for the better off. The 2014 film adaptation does somewhat worse in peddling this fantasy (and has a much weaker treatment of the relationship between the teens' parents too, an underappreciated subtlety of the book). The novel, however, is pretty slick stuff, and it is difficult to fault it for what it is. For some comparison, I later read Green's Looking for Alaska and Paper Towns which, as I mention, tug at many of the same strings, but they don't come together nearly as well as The Fault in Our Stars. James Joyce claimed that "sentimentality is unearned emotion", and in this respect, The Fault in Our Stars really does earn it.

The Plague Albert Camus P. D. James' The Children of Men, George Orwell's Nineteen Eighty-Four, Arthur Koestler's Darkness at Noon ... dystopian fiction was already a theme of my reading in 2020, so given world events it was an inevitability that I would end up with Camus's novel about a plague that swept through the Algerian city of Oran. Is The Plague an allegory about the Nazi occupation of France during World War Two? Where are all the female characters? Where are the Arab ones? Since its original publication in 1947, there's been so much written about The Plague that it's hard to say anything new today. Nevertheless, I was taken aback by how well it captured so much of the nuance of 2020. Whilst we were saying just how 'unprecedented' these times were, it was eerie how a novel written in the 1940s could accurately capture how many of us were feeling well over seventy years later: the attitudes of the people; the confident declarations from the institutions; the misaligned conversations that led to accidental misunderstandings. The disconnected lovers. The only thing that perhaps did not work for me in The Plague was the 'character' of the church. Although I could appreciate most of the allusion and metaphor, it was difficult for me to relate to the significance of Father Paneloux, particularly regarding his change of view on the doctrinal implications of the virus and, spoiler alert, that he finally died of a "doubtful case" of the disease, beyond the idea that Paneloux's beliefs are in themselves "doubtful". Answers on a postcard, perhaps. The Plague even seemed to predict how we, at least speaking of the UK, would react when the waves of the virus waxed and waned as well:
The disease stiffened and carried off three or four patients who were expected to recover. These were the unfortunates of the plague, those whom it killed when hope was high
It somehow captured the nostalgic yearning for high-definition videos of cities and public transport; one character even visits the completely deserted railway station in Oran simply to read the timetables on the wall.

Tinker, Tailor, Soldier, Spy John le Carré There's absolutely none of the Mad Men glamour of James Bond in John le Carré's icy world of Cold War spies:
Small, podgy, and at best middle-aged, Smiley was by appearance one of London's meek who do not inherit the earth. His legs were short, his gait anything but agile, his dress costly, ill-fitting, and extremely wet.
Almost a direct rebuttal to Ian Fleming's 007, Tinker, Tailor has broken-down cars, bad clothes, women with their own internal and external lives (!), pathetically primitive gadgets, and (contra Mad Men) hangovers that last significantly longer than ten minutes. In fact, the main aspect that the mostly excellent 2011 film adaptation doesn't really capture is the smoggy and run-down nature of 1970s London: this is not your proto-Cool Britannia of Austin Powers or GTA:1969; the city is truly 'gritty' in the sense that there is a thin film of dirt and grime on every surface imaginable. Another angle that the film cannot capture well is just how purposefully the novel does not mention the United States. Despite the US obviously being the dominant power, the British vacillate between pretending it doesn't exist and implying its irrelevance to the matter at hand. This is no mistake on le Carré's part, as careful readers are rewarded by finding this denial of US hegemony in metaphor throughout. Pace Ian Fleming, there is no obvious Felix Leiter to loudly throw money at the problem or a Sheriff Pepper to serve as cartoon racist for the Brits to feel superior about. By contrast, I recall that a clever allusion to "dusty teabags" is subtly mirrored a few paragraphs later with a reference to the installation of a coffee machine in the office, likely symbolic of the omnipresent and unavoidable influence of America. (The officer class convince themselves that coffee is a European import.) Indeed, le Carré communicates a feeling of being surrounded on all sides by the peeling wallpaper of Empire. Oftentimes, the writing style matches the gracelessness and inelegance of the world it depicts. The sentences are dense and you find your brain performing a fair amount of mid-flight sentence reconstruction, reparsing clauses, commas and conjunctions to interpret le Carré's intended meaning. In fact, in his eulogy-cum-analysis of le Carré's writing style, William Boyd, himself a ventriloquist of Ian Fleming, named this intentional technique 'staccato'. Like the musical term, I suspect the effect of this literary staccato is as much about the impact it makes on a sentence as the imperceptible space it generates after it. Lastly, the large cast in this sprawling novel is completely believable, all the way from the Russian spymaster Karla to the minor schoolboy Roach, the latter possibly a stand-in for le Carré himself. I got through the 500-odd pages in just a few days, somehow managing to hold the almost-absurdly complicated plot in my head. This is one of those classic books of the genre that made me wonder why I had not got around to it before.

The Nickel Boys Colson Whitehead According to the judges who awarded it the Pulitzer Prize for Fiction, The Nickel Boys is "a devastating exploration of abuse at a reform school in Jim Crow-era Florida" that serves as a "powerful tale of human perseverance, dignity and redemption". But whilst there is plenty of this perseverance and dignity on display, I found little redemption in this deeply cynical novel. It could almost be read as a follow-up book to Whitehead's popular The Underground Railroad, which itself won the Pulitzer Prize in 2017. Indeed, each book focuses on a young protagonist who might be euphemistically referred to as 'downtrodden'. But The Nickel Boys is not only far darker in tone, it feels much closer and more connected to us today. Perhaps this is unsurprising, given that it is based on the story of the Dozier School in northern Florida, which operated for over a century before its long history of institutional abuse and racism was exposed by a 2012 investigation. Nevertheless, if you liked the social commentary in The Underground Railroad, then there is much more of that in The Nickel Boys:
Perhaps his life might have veered elsewhere if the US government had opened the country to colored advancement like they opened the army. But it was one thing to allow someone to kill for you and another to let him live next door.
Sardonic aperçus of this kind are pretty relentless throughout the book, but it never tips its hand too far into nihilism, especially when some of the visual metaphors are first-rate: "An American flag sighed on a pole" is one I can easily recall from memory. In general though, The Nickel Boys is not only more world-weary in tenor than his previous novel; the United States it describes seems almost too beaten down to have the energy to conjure up the Swiftian magical realism that prevented The Underground Railroad from being overly lachrymose. Indeed, even when Whitehead transports us to present-day New York City, we can't indulge in another kind of fantasy, the one where America has solved its problems:
The Daily News review described the [Manhattan restaurant] as nouveau Southern, "down-home plates with a twist." What was the twist? That it was soul food made by white people?
It might be overly reductionist to connect Whitehead's tonal downshift with the racial justice movements of the past few years, but whatever the reason, we've ended up with a hard-hitting, crushing and frankly excellent book.

True Grit & No Country for Old Men Charles Portis & Cormac McCarthy It's one of the most tedious clichés to claim the book is better than the film, but these two books are of such high quality that even the Coen Brothers at their best cannot transcend them. I'm grouping these books together here though, not because their respective adaptations exemplify some of the best cinema of the 21st century, but because of their superb treatment of language. Take the use of dialogue. Cormac McCarthy famously does not use any punctuation ("I believe in periods, in capitals, in the occasional comma, and that's it"), but the conversations in No Country for Old Men nevertheless feel familiar and commonplace, despite being relayed through this unconventional technique. In lesser hands, McCarthy's written-out Texan drawl would be the novelistic equivalent of white rap or Jar Jar Binks, but not only is the effect entirely gripping, it helps you to believe you are physically present in the many intimate and domestic conversations that hold this book together. Perhaps the cinematic familiarity helps, as you can almost hear Tommy Lee Jones' voice as Sheriff Bell from the opening page to the last. Charles Portis' True Grit excels in its dialogue too, but in this book it is not so much in how it flows (although that is delightful in its own way) but in how forthright and sardonic Maddie Ross is:
"Earlier tonight I gave some thought to stealing a kiss from you, though you are very young, and sick and unattractive to boot, but now I am of a mind to give you five or six good licks with my belt." "One would be as unpleasant as the other."
Perhaps this should be unsurprising. Maddie, a fourteen-year-old girl from Yell County, Arkansas, can barely fire her father's heavy pistol, so she only has words to wield as her weapon. Anyway, it's not just me who treasures this book. In her encomium that presages most modern editions, Donna Tartt of The Secret History fame traces the novel's origins through Huckleberry Finn, praising its elegance and economy: "The plot of True Grit is uncomplicated and as pure in its way as one of the Canterbury Tales". I've not read any Chaucer, but I am inclined to agree. Tartt also recalls that True Grit vanished almost entirely from the public eye after the release of John Wayne's flimsy cinematic vehicle in 1969; this earlier film was, Tartt believes, "good enough, but doesn't do the book justice". As it happens, reading a book with its big screen adaptation as a chaser has been a minor theme of my 2020, including P. D. James' The Children of Men, Kazuo Ishiguro's Never Let Me Go, Patricia Highsmith's Strangers on a Train, James Ellroy's The Black Dahlia, John Green's The Fault in Our Stars, John le Carré's Tinker, Tailor, Soldier, Spy and even a staged production of Charles Dickens' A Christmas Carol streamed from The Old Vic. For an autodidact with no academic background in literature or cinema, I've been finding this an effective and enjoyable means of getting closer to these fine books and films: it is precisely where they deviate (or perhaps where they are deficient) that offers a means by which one can see how they were constructed. I've also found that adaptations can tell you a lot about the culture in which they were made: take the 'straightwashing' in the film version of Strangers on a Train (1951) compared to the original novel, for example. It is certainly true that adaptations rarely (as Tartt put it) "do the book justice", but she might also be right to alight on a legal metaphor, for as the saying goes, to judge a movie in comparison to the book is to do both a disservice.

The Glass Hotel Emily St. John Mandel In The Glass Hotel, Mandel somehow pulls off the impossible: writing a loose roman-à-clef on Bernie Madoff, a Ponzi scheme and the ephemeral nature of finance capital that is tranquil and shimmeringly beautiful. Indeed, don't get the wrong idea about the subject matter; this is no over-caffeinated The Big Short, as The Glass Hotel is less about Madoff or coked-up finance bros than about the fragile unreality of the late 2010s, a time which was, as we indeed discovered in 2020, one event away from almost shattering completely. Mandel's prose has that translucent, phantom quality to it where the chapters slip through your fingers when you try to grasp at them, and the plot is like a ghost ship that slips silently, like the Mary Celeste, onto the Canadian water next to which the eponymous 'Glass Hotel' resides. Indeed, not unlike The Overlook Hotel, the novel so overflows with symbolism that even the title evokes the idea of impermanence: permanently living in a hotel might serve as a house, but it won't provide a home. It's risky to generalise about such things post-2016, but the whole story sits in the infinitesimally small distance between perception and reality, a self-constructed culture that is not so much 'post-truth' but somewhere in between. There's something to consider in almost every character too. Take the stand-in for Bernie Madoff: no caricature of Wall Street out of a 1920s political cartoon or Brechtian satire, Jonathan Alkaitis has none of the oleaginous sleaze of a Dominique Strauss-Kahn, the cold sociopathy of a Marcus Halberstam nor the well-exercised sinuses of, say, a Jordan Belfort. Alkaitis is, dare I say it, eminently likeable, and the book is all the better for it. Even the C-level characters have something to say: Enrico, trivially escaping from the regulators (who are, without Mandel ever telling us explicitly, pathetically late to the fraud), is daydreaming about the girlfriend he abandoned in New York: "He wished he'd realised he loved her before he left". What was in his previous life that prevented him from doing so? Perhaps he was never in love at all, or is love itself just as transient as the imaginary money in all those bank accounts? Maybe he fell in love just as he crossed safely into Mexico? When, precisely, do we fall in love anyway? I went on to read Mandel's Last Night in Montreal, an early work where you can feel her reaching for that other-worldly quality that she so masterfully achieves in The Glass Hotel. Her fêted Station Eleven is on my must-read list for 2021. "What is truth?" asked Pontius Pilate. Not even Mandel can give us the answer, but this will certainly do for now.

Running the Light Sam Tallent Although it trades in all of the clichés and stereotypes of the stand-up comedian (the triumvirate of drink, drugs and divorce), Sam Tallent's debut novel depicts an extremely convincing fictional account of a touring road comic. The comedian Doug Stanhope (who himself released a fairly decent No Encore for the Donkey memoir in 2020) hyped Sam's book relentlessly on his podcast during lockdown... and justifiably so. I ripped through Running the Light in a few short hours, the only disappointment being that I can't seem to find videos online of Sam that come anywhere close to matching his writing style. If you liked the rollercoaster energy of Paul Beatty's The Sellout, the cynicism of George Carlin and the car-crash inevitability of final-season Breaking Bad, check this great book out.

Non-fiction Inside Story Martin Amis This was my first introduction to Martin Amis's work after hearing that his "novelised autobiography" contained a fair amount about Christopher Hitchens, an author with whom I had one of those rather clichéd parasocial relationships in the early days of YouTube. (Hey, it could have been much worse.) Amis calls his book a "novelised autobiography", and just as much has been made of its quasi-fictional nature as of the many diversions into didactic writing advice that sit betwixt each chapter: "Not content with being a novel, this book also wants to tell you how to write novels", complained Tim Adams in The Guardian. I suspect that reviewers who have grown up with Martin since his debut book in 1973 rolled their eyes at yet another demonstration of his manifest cleverness, but as my first exposure to Amis's gift of observation, I confess that I thought it was actually kinda clever. Try, for example, "it remains a maddening truth that both sexual success and sexual failure are steeply self-perpetuating" or "a hospital gym is a contradiction like a young Conservative", etc. Then again, perhaps I was experiencing a form of nostalgia for a pre-Gamergate YouTube, when everything in the world was a lot simpler... or at least things could be solved by articulate gentlemen who honed their art of rhetoric at the Oxford Union. I went on to read Martin's first novel, The Rachel Papers (is it 'arrogance' if you are, indeed, that confident?), as well as his 1997 Night Train. I plan to read more of him in the future.

The Collected Essays, Journalism and Letters: Volume 1 & Volume 2 & Volume 3 & Volume 4 George Orwell These deceptively bulky four volumes contain all of George Orwell's essays, reviews and correspondence, from his teenage letters sent to local newspapers to notes to his literary executor on his deathbed in 1950. Reading this was part of a larger, multi-year project of mine to cover the entirety of his output. By including this here, however, I'm not recommending that you read everything that came out of Orwell's typewriter. The letters to friends and publishers will only be interesting to biographers or hardcore fans (although I would recommend Dorian Lynskey's The Ministry of Truth: A Biography of George Orwell's 1984 first). Furthermore, many of his book reviews will be of little interest today. Still, some insights can be gleaned; if there is any inconsistency in this huge corpus, it is that his best work is almost 'too' good and too impactful, making his merely-average writing appear like hackwork. There are some gems that don't make the usual essay collections too, and some of Orwell's most astute social commentary came out of the series of articles he wrote for the left-leaning newspaper Tribune, related in many ways to the US Jacobin. You can also see some of his most famous ideas start to take shape years, if not decades, before they appear in his novels in these prototype blog posts. I also read Dennis Glover's novelised account of the writing of Nineteen Eighty-Four, called The Last Man in Europe, and I plan to re-read some of Orwell's earlier novels during 2021 too, including A Clergyman's Daughter and his 'antebellum' Coming Up for Air, which he wrote just before the Second World War; his most under-rated novel in my estimation. As it happens, and with the exception of the US and Spain, copyright in the works published in his lifetime ends on 1st January 2021. Make of that what you will.

Capitalist Realism & Chavs: The Demonisation of the Working Class Mark Fisher & Owen Jones These two books are not natural companions to one another and there is likely much that Jones and Fisher would vehemently disagree on, but I am pairing these books together here because they represent the best of the 'political' books I read in 2020. Mark Fisher was a dedicated leftist whose first book, Capitalist Realism, marked an important contribution to political philosophy in the UK. However, since his suicide in early 2017, the currency of his writing has markedly risen, and Fisher is now frequently referenced due to his belief that the prevalence of mental health conditions in modern life is a side-effect of various material conditions, rather than a natural or unalterable fact "like weather". (Of course, our 'weather' is increasingly being determined by a combination of politics, economics and petrochemistry rather than pure randomness.) Still, Fisher wrote on all manner of topics, from the 2012 London Olympics to the "weird and eerie" electronic music that yearns for a lost future that will never arrive, possibly prefiguring or influencing the Fallout video game series. Saying that, I suspect Fisher will resonate better with a UK audience than one across the Atlantic, not necessarily because he was minded to write about the parochial politics and culture of Britain, but because his writing often carries some exasperation at the suppression of class in favour of identity-oriented politics, a viewpoint not entirely prevalent in the United States outside of, say, Touré F. Reed or the late Michael Brooks. (Indeed, Fisher is likely best known in the US as the author of his controversial 2013 essay, Exiting the Vampire Castle, but that does not figure greatly in this book.) Regardless, Capitalist Realism is an insightful, damning and deeply unoptimistic book, best enjoyed in the warm sunshine. I found it an ironic compliment that I had quoted so many paragraphs that my Kindle's copy protection routines prevented me from clipping any further. Owen Jones needs no introduction to anyone who regularly reads a British newspaper, especially since 2015, when he unofficially served as a proxy and punching bag for expressing frustrations with the then-Labour leader, Jeremy Corbyn. However, as the subtitle of Jones' 2012 book suggests, Chavs attempts to reveal the "demonisation of the working class" in post-financial crisis Britain. Indeed, the timing of the book is central to Jones' analysis, specifically that the stereotype of the "chav" is used by government and the media as a convenient figleaf to avoid meaningful engagement with economic and social problems on an austerity-ridden island. (I'm not quite sure what the US equivalent to 'chav' might be. Perhaps Florida Man without the implications of mental health.) Anyway, Jones certainly has a point. From Vicky Pollard to the attacks on Jade Goody, there is an ignorance and prejudice at the heart of the 'chav' backlash, and that would be bad enough even if it was not being co-opted or criminalised for ideological ends. Elsewhere in political science, I also caught Michael Brooks' Against the Web and David Graeber's Bullshit Jobs, although they are not quite methodical enough to recommend here. However, Graeber's award-winning Debt: The First 5000 Years will be read in 2021.
Matt Taibbi's Hate Inc: Why Today's Media Makes Us Despise One Another is worth a brief mention here too, but its sprawling nature felt very much like I was reading a set of Substack articles loosely edited together. And, indeed, I was.

The Golden Thread: The Story of Writing Ewan Clayton A recommendation from a dear friend, Ewan Clayton's The Golden Thread is a journey through the long history of writing, from the Dawn of Man to the present day. Whether you are a linguist, a graphic designer, a visual artist, a typographer, an archaeologist or 'just' a reader, there is probably something in here for you. I was already dipping my quill into calligraphy this year so I suspect I would have liked this book in any case, but highlights would definitely include the changing role of writing due to the influence of textual forms in the workplace, as well as a digression on the ergonomic desks employed by monks and scribes in the Middle Ages. A lot of books by otherwise-sensible authors overstretch themselves when they write about computers or other technology from the Information Age, resulting in bizarre non-sequiturs at best and dangerously Panglossian viewpoints at worst. But Clayton surprised me by writing extremely cogently and accurately on the role of text in this new and unpredictable era. After finishing it, I realised why: for a number of years, Clayton was a consultant for the legendary Xerox PARC, where he worked in a group focusing on documents and contemporary communications whilst his colleagues were busy inventing the graphical user interface, laser printing, text editors and the computer mouse.

New Dark Age & Radical Technologies: The Design of Everyday Life James Bridle & Adam Greenfield I struggled to describe these two books to friends, so I doubt I will suddenly do a better job here. Allow me to quote from Will Self's review of James Bridle's New Dark Age in the Guardian:
We're accustomed to worrying about AI systems being built that will either "go rogue" and attack us, or succeed us in a bizarre evolution of, um, evolution; what we didn't reckon on is the sheer inscrutability of these manufactured minds. And minds is not a misnomer. How else should we think about the neural network Google has built so its translator can model the interrelation of all words in all languages, in a kind of three-dimensional "semantic space"?
New Dark Age also turns its attention to the weird, algorithmically-derived products offered for sale on Amazon as well as the disturbing and abusive videos that are automatically uploaded by bots to YouTube. It should, by rights, be a mess of disparate ideas and concerns, but Bridle has a flair for introducing topics which reveals he comes to computer science from another discipline altogether; indeed, in a four-part series he made for Radio 4, he's primarily referred to as "an artist". Whilst New Dark Age has rather abstract section topics, Adam Greenfield's Radical Technologies is a rather different book altogether. Each chapter dissects one of the so-called 'radical' technologies that condition the choices available to us, asking how they work, what challenges they present to us and who ultimately benefits from their adoption. Greenfield takes his scalpel to smartphones, machine learning, cryptocurrencies, artificial intelligence, etc., and I don't think it would be unfair to say that he starts and ends with a cynical point of view. He is no reactionary Luddite, though, and his book is both informed and extremely well-explained, and it also lacks the lazy, affected and Private Eye-like cynicism of, say, Attack of the 50 Foot Blockchain. The books aren't a natural pair, for Bridle's writing contains quite a bit of air in places, ironically mimicking the very 'clouds' he inveighs against. Greenfield's book, by contrast, has little air and a much lower pH value. Still, it was more than refreshing to read two technology books that do not limit themselves to platitudinal booleans, be those dangerously naive (e.g. Kevin Kelly's The Inevitable) or relentlessly nihilistic (Shoshana Zuboff's The Age of Surveillance Capitalism). Sure, they are both anti-technology screeds, but they tend to make arguments about systems of power rather than specific companies, and avoid being too anti-'Big Tech' through a narrower, Silicon Valley-obsessed lens; for that (dipping into some other 2020 reading of mine) I might suggest Wendy Liu's Abolish Silicon Valley or Scott Galloway's The Four. Still, both books are superlatively written. In fact, Adam Greenfield has some of the best non-fiction writing around, both in terms of how he can explain complicated concepts (particularly the smart contract mechanism of the Ethereum cryptocurrency) as well as in his extremely finely-crafted sentences; I often felt that the writing style almost had no need to be that poetic, and I particularly enjoyed his fictional scenarios at the end of the book.

The Algebra of Happiness & Indistractable: How to Control Your Attention and Choose Your Life Scott Galloway & Nir Eyal A cocktail of insight, informality and abrasiveness makes NYU Professor Scott Galloway uncannily appealing to guys around my age. Although Galloway definitely has his own wisdom and experience, similar to Joe Rogan I suspect that a crucial part of Galloway's appeal is that you feel you are learning right alongside him. Thankfully, 'Prof G' is far less, err, problematic than Rogan (Galloway is more of a well-meaning, spirited centrist), although he, too, has some pretty awful takes at times. This is a shame, because removed from the whirlwind of social media he can be really quite considered, such as in this long-form interview with Stephanie Ruhle. In fact, it is this kind of sentiment that he captured in his 2019 Algebra of Happiness. When I look over my highlighted sections, it's clear that it's rather schmaltzy out of context ("Things you hate become just inconveniences in the presence of people you love..."), but his one-two punch of cynicism and saccharine ("Ask somebody who purchased a home in 2007 if their 'American Dream' came true...") is weirdly effective, especially when he uses his own family experiences as part of his story:
A better proxy for your life isn't your first home, but your last. Where you draw your last breath is more meaningful, as it's a reflection of your success and, more important, the number of people who care about your well-being. Your first house signals the meaningful: your future and possibility. Your last home signals the profound: the people who love you. Where you die, and who is around you at the end, is a strong signal of your success or failure in life.
Nir Eyal's Indistractable, however, is a totally different kind of 'self-help' book. The important background story is that Eyal was the author of the widely-read Hooked, which turned into a secular Bible of so-called 'addictive design'. (If you've ever been cornered by a techbro wielding a Wikipedia-thin knowledge of B. F. Skinner's behaviourist psychology and how it can get you to click 'Like' more often, it ultimately came from Hooked.) However, Eyal's latest effort is actually an extended mea culpa for his previous sin, and he offers both high- and low-level palliative advice on how to avoid falling for the tricks he so studiously espoused before. I suppose we should be thankful to capitalism for selling both the cause and the cure. Speaking of markets, there appears to be a growing appetite for books in this 'anti-distraction' category, and whilst I cannot claim to have done an exhaustive study of this nascent field, Indistractable argues its points well without relying on accurate-but-dry "studies show..." or, worse, Gladwellian gotchas. My main criticism, however, would be that Eyal doesn't acknowledge the limits of a self-help approach to this problem; it seems that many of the issues he outlines are an inescapable part of the alienation in modern Western society, and the only way one can really avoid distraction is to move up the income ladder or move out to a 500-acre ranch.

12 January 2021

John Goerzen: The Good, Bad, and Scary of the Banning of Donald Trump, and How Decentralization Makes It All Better

It is undeniable that banning Donald Trump from Facebook, Twitter, and similar sites is a benefit for the moment. It may well save lives, perhaps lots of lives. But it raises quite a few troubling issues. First, as the EFF points out, these platforms have privileged speakers with power, especially politicians, over regular users. For years now, it has been obvious to everyone that Donald Trump has been violating policies on both platforms, and yet they did little or nothing about it. The result we saw last week was entirely foreseeable and indeed, WAS foreseen, including by elements in those companies themselves. (The ACLU also raises some good points.) Contrast that with how others get treated. Facebook, two days after the coup attempt, banned Benjamin Wittes, apparently because he mentioned an Atlantic article opposed to nutcase conspiracy theories. The EFF has also documented many more egregious examples: taking down documentation of war crimes, childbirth images, black activists showing the racist messages they received, women discussing online harassment, etc. The list goes on; YouTube, for instance, has often been promoting far-right violent videos while removing peaceful LGBTQ ones. In short, have we simply achieved legal censorship by outsourcing it to dominant corporations? It is worth pausing at this point to recognize two important principles: first, that we do not see it as right to compel speech; secondly, that there exist communications channels and other services that nobody is calling on to suspend Donald Trump. Let's dive into those a little bit. There have been no prominent calls for AT&T, Verizon, Gmail, or whoever provides Trump and his campaign with cell phones or email to suspend their service to him. Moreover, the gas stations that fuel his vehicles and the airports that service his plane continue to provide those services, and nobody has seriously questioned that, either. Even his Apple phone that he uses to post to Twitter remains, as far as I know, fully active. Secondly, imagine you were starting up a small web forum focused on raising tomato plants. It is, and should be, well within your rights to keep tomato-haters out, as well as people that have no interest in tomatoes but would rather talk about rutabagas, politics, or Mars. If you are going to host a forum about tomatoes, you have the right to keep it a forum about tomatoes; you cannot be forced to distribute someone else's speech. Likewise in traditional media, a newspaper cannot be forced to print every letter to the editor in full. In law, there is a notion of a common carrier, which provides services to the general public without discrimination. Phone companies and ISPs fall under this. Facebook, Twitter, and tomato sites don't. But consider what happens if Facebook bans you. You might be using Facebook-owned Whatsapp to communicate with family and friends, and suddenly find yourself unable to ask someone to pick you up. Or your treasured family photos might be in Facebook-owned Instagram, lost forever. It's not just Facebook; similar things happen with Google, locking people out of their phones and laptops, their emails, even their photos. Is it right that Facebook and Google aren't regulated as common carriers? Perhaps, or perhaps we need some line of demarcation between their speech-to-the-public services (Facebook timeline posts, YouTube) and private communication (Whatsapp, Gmail). It's a thorny issue; should government be regulating speech instead? That's also fraught. So is corporate control.
Decentralization Helps Dramatically With email, you get to pick your email provider (yes, there are two or three big ones, but still plenty of others). Each email provider will have its own set of things it considers acceptable, and its own set of other servers and accounts it's willing to exchange mail with. (It is extremely common for mail providers to choose not to accept mail from various other mail servers based on ISP, IP address, reputation, and so forth.) What if we could do something like that for Twitter and Facebook? Let you join whatever instance you like. Maybe one instance is all about art and they don't talk about politics. Or another is all about Free Software and they don't have advertising. And then there are plenty of open instances that accept anything that's respectful. And, like email, people on one server can interact with those using another just as easily as if they were using the same one. Well, this isn't hypothetical; it already exists in the Fediverse. The most common option is Mastodon, and it so happens that a month ago I wrote about its benefits for other reasons, and included some links on getting started. There is no reason that we must all let our online speech be controlled by companies with a profit motive to keep hate speech on their platforms. There is no reason that we must all have a single set of rules, or accept strong corporate or government control, either. The quality of conversation on Mastodon is far higher than on either Twitter or Facebook; decentralization works and it's here today.

13 September 2020

Russell Coker: Setting Up a Dell T710 Server with DRAC6

I've just got a Dell T710 server for LUV and I've been playing with the configuration options. Here's a list of what I've done and recommendations on how to do things. I decided not to try to write a step by step guide to doing stuff as the situation doesn't work for that. I think that a list of how things are broken and what to avoid is more useful. BIOS Firstly with a Dell server you can upgrade the BIOS from a Linux root shell. Generally when a server is deployed you won't want to upgrade the BIOS (why risk breaking something when it's working), so before deployment you probably should install the latest version. Dell provides a shell script with encoded binaries in it that you can just download and run; it's ugly but it works. The process of loading the firmware takes a long time (a few minutes) with no progress reports, so best to plug both PSUs in and leave it alone. At the end of loading the firmware a hard reboot will be performed, so upgrading your distribution while doing the install is a bad idea (Debian is designed to not lose data in this situation so it's not too bad). IDRAC IDRAC is the Integrated Dell Remote Access Controller. By default it will listen on all Ethernet ports, get an IP address via DHCP (using a different Ethernet hardware address to the interface the OS sees), and allow connections. Configuring it to be restrictive as to which ports it listens on may be a good idea (the T710 has 4 ports built in so having one reserved for management is usually OK). You need to configure a username and password for IDRAC that has administrative access in the BIOS configuration. Web Interface By default IDRAC will run a web server on the IP address it gets from DHCP; you can connect to that from any web browser that allows ignoring invalid SSL keys. Then you can use the username and password configured in the BIOS to login. IDRAC 6 on the PowerEdge T710 recommends IE 6. To get an SSL cert that matches the name you want to use (and doesn't give browser errors) you have to generate a CSR (Certificate Signing Request) on the DRAC; the only field that matters is the CN (Common Name), the rest just have to be something that Letsencrypt will accept. Certbot has the option --config-dir /etc/letsencrypt-drac to specify an alternate config directory; the SSL key for the DRAC should be entirely separate from the SSL key for other stuff. Then use the --csr option to specify the path of the CSR file. When you run letsencrypt the file name of the output file you want will be in the format *_chain.pem. You then have to upload that to IDRAC to have it used. This is a major pain given the short lifetime of letsencrypt certificates. Hopefully a more recent version of IDRAC has Certbot built in. When you access the RAC via ssh (see below) you can run the command racadm sslcsrgen to generate a CSR that can be used by certbot. So it's probably possible to write expect scripts to get that CSR, run certbot, and then put the SSL certificate back. I don't expect to use IDRAC often enough to make it worth the effort (I can tell my browser to ignore an outdated certificate), but if I had dozens of Dells I'd script it. SSH The web interface allows configuring ssh access, which I strongly recommend doing. You can configure ssh access via password or via ssh public key. For ssh access set TERM=vt100 on the host to enable backspace as ^H. Something like TERM=vt100 ssh root@drac. Note that changing certain other settings in IDRAC such as enabling Smartcard support will disable ssh access.
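Returning to the SSL certificate discussion above, the Certbot invocation could look something like the following. This is only a sketch: the CSR filename and the use of --standalone for the HTTP challenge are assumptions for illustration, not part of the original setup.
# fetch the CSR generated on the DRAC (racadm sslcsrgen) to ./drac.csr, then:
certbot certonly --standalone --config-dir /etc/letsencrypt-drac --csr drac.csr
# the certificate to upload via the IDRAC web interface then appears
# in the current directory as *_chain.pem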
There is a limit to the number of open sessions for managing IDRAC; when you ssh to the IDRAC you can run racadm getssninfo to get a list of sessions and racadm closessn -i NUM to close a session. The closessn command takes a -a option to close all sessions but that just gives an error saying that you can't close your own session, because of programmer stupidity. The IDRAC web interface also has an option to close sessions. If you get to the limits of both ssh and web sessions due to network issues then you presumably have a problem. I couldn't find any documentation on how the ssh host key is generated. I Googled for the key fingerprint and didn't get a match so there's a reasonable chance that it's unique to the server (please comment if you know more about this). Don't Use Graphical Console The T710 is an older server and runs IDRAC6 (IDRAC9 is the current version). The Java based graphical console access doesn't work with recent versions of Java. The Debian package icedtea-netx has the javaws command for running the .jnlp file for the console; by default the web browser won't run this, so you download the .jnlp file and pass that as the first parameter to the javaws program, which then downloads a bunch of Java classes from the IDRAC to run. One error I got with Java 11 was "Exception in thread Timer-0 java.util.ServiceConfigurationError: java.nio.charset.spi.CharsetProvider: Provider sun.nio.cs.ext.ExtendedCharsets could not be instantiated"; Google didn't turn up any solutions to this. Java 8 didn't have that problem but had a connection failed error that some people reported as being related to the SSL key, but replacing the SSL key for the web server didn't help. The suggestion of running a VM with an old web browser to access IDRAC didn't appeal. So I gave up on this. Presumably a Windows VM running IE6 would work OK for this. Serial Console Fortunately IDRAC supports a serial console. Here's a page summarising serial console setup for DRAC [1]. Once you have done that, put console=tty0 console=ttyS1,115200n8 on the kernel command line and Linux will send the console output to the virtual serial port. To access the serial console from remote you can ssh in and run the RAC command console com2 (there is no option for using a different com port). The serial port seems to be unavailable through the web interface. If I was supporting many Dell systems I'd probably setup a ssh to JavaScript gateway to provide web based console access. It's disappointing that Dell didn't include this. If you disconnect from an active ssh serial console then the RAC might keep the port locked, and any future attempts to connect to it will give the following error:
/admin1-> console com2
console: Serial Device 2 is currently in use
So far the only way I've discovered to get console access again after that is the command racadm racreset. If anyone knows of a better way please let me know. As an aside, having racreset so close to racresetcfg (which resets the configuration to default and requires a hard reboot to configure it again) seems like a really bad idea. Host Based Management
deb http://linux.dell.com/repo/community/ubuntu xenial openmanage
The above apt sources.list line allows installing Dell management utilities (Xenial is old but they work on Debian/Buster). Probably the packages srvadmin-storageservices-cli and srvadmin-omacore will drag in enough dependencies to get it going. Here are some useful commands:
# show hardware event log
omreport system esmlog
# show hardware alert log
omreport system alertlog
# give summary of system information
omreport system summary
# show versions of firmware that can be updated
omreport system version
# get chassis temp
omreport chassis temps
# show physical disk details on controller 0
omreport storage pdisk controller=0
RAID Control The RAID controller is known as PERC (PowerEdge RAID Controller), and the Dell web site has an rpm package of the perccli tool to manage the RAID from Linux. This is statically linked and appears to have been built from several different versions of libraries. The command perccli show gives an overview of the configuration, but the command perccli /c0 show (to give detailed information on controller 0) SEGVs and the kernel logs a vsyscall attempted with vsyscall=none message. Here's an overview of the vsyscall emulation issue [2]. Basically I could add vsyscall=emulate to the kernel command line and slightly reduce security for the system to allow system calls from perccli that are called from old libc code to work, or I could run code from a dubious source as root. Some versions of IDRAC have a racadm raid command that can be run from a ssh session to perform RAID administration remotely; mine doesn't have that. As an aside, the help for the RAC system doesn't list all commands and the Dell documentation is difficult to find, so blog posts from other sysadmins are the best source of information. I have configured IDRAC to have all of the BIOS output go to the virtual serial console over ssh so I can see the BIOS prompt me for PERC administration, but it didn't accept my key presses when I tried to do so. In any case this requires downtime and I'd like to be able to change hard drives without rebooting. I found vsyscall_trace on Github [3]; it uses the ptrace interface to emulate vsyscall on a per-process basis. You just run vsyscall_trace perccli and it works! Thanks Geoffrey Thomas for writing this! Here are some perccli commands:
# show overview
perccli show
# help on adding a vd (RAID)
perccli /c0 add help
# show controller 0 details
perccli /c0 show
# add a vd (RAID) of level RAID0 (r0) with the drive 32:0 (enclosure:slot from above command)
perccli /c0 add vd r0 drives=32:0
When a disk is added to a PERC controller about 525MB of space is used at the end for RAID metadata. So when you create a RAID-0 with a single device as in the above example all disk data is preserved by default except for the last 525MB. I have tested this by running a BTRFS scrub on a disk from another system after making it into a RAID-0 on the PERC.

30 August 2020

Jonathan Carter: The metamorphosis of Loopy Loop

Dealing with the void during MiniDebConf Online #1 Between 28 and 31 May this year, we set out to create our first ever online MiniDebConf for Debian. Many people had been meaning to do something similar for a long time, but it just hadn't worked out yet. With many of us being in lockdown due to COVID-19, and with the strong possibility looming that DebConf20 might have had to become an online event, we rushed towards organising the first ever Online MiniDebConf and put together some form of usable video stack for it. I could go into all kinds of details on the above, but this post is about a bug that led to a pretty nifty feature for DebConf20. The tool that we use to capture Jitsi calls is called Jibri (Jitsi Broadcasting Infrastructure). It had a bug (well, a bug for us, but it's an upstream feature) where Jibri would hang up after 30s of complete silence, because it would assume that the call had ended and that the worker could be freed up again. This would result in the stream being ended at the end of every talk, so before the next talk, someone would have to remember to press play again in their media player or on the video player on the stream page. Hrmph. Easy solution on the morning that the conference starts? I was testing a Debian Live image the night before in a KVM and thought that I might as well just start a Jitsi call from there and keep a steady stream of silence so that Jibri doesn't hang up. It worked! But the black screen and silence on stream was a bit eerie. Because this event was so experimental in nature, and because we were on such an incredibly tight timeline, we opted not to seek sponsors for this event, so there was no sponsors loop that we'd usually stream during a DebConf event. Then I thought "Ah! I could just show the schedule!".

The stream looked bright and colourful (and was even useful!) and Jitsi/Jibri didn't die. I thought my work was done. As usual, little did I know how untrue that was. The silence was slightly disturbing after the talks, and people asked for some music. Playing music on my VM and capturing the desktop audio into Jitsi was just a few pulseaudio settings away, so I spent two minutes finding some freely licensed tracks that sounded OK enough to just start playing on the stream. I came across mini-albums by Captive Portal and Cinema Noir, and during the course of the MiniDebConf Online I even started enjoying those. Someone also pointed out that it would be really nice to have a UTC clock on the stream. I couldn't find a nice clock in a hurry so I just added a tmux clock in the meantime while we dealt with the real-time torrent of issues that usually happens when organising events like this.
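For the record, the pulseaudio side of that can be as simple as a null sink whose monitor is then selected as Jitsi's input; a rough sketch, where the sink name and the player invocation are assumptions for illustration:
# create a virtual sink to play the music into
pactl load-module module-null-sink sink_name=stream_music
# play the freely licensed tracks into that sink
paplay --device=stream_music some_track.ogg
# then pick "Monitor of stream_music" as the input device in Jitsi
# (e.g. via pavucontrol)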
Speaking of issues, during our very first talk of the last day, our speaker had a power cut during the talk and abruptly dropped off. Oops! So, since I had a screenshare open from the VM to the stream, I thought I'd just pop in a quick message in a text editor to let people know that we're aware of it and trying to figure out what's going on.
In the end, MiniDebConf Online worked out all right. Besides the power cut for our one speaker, and another who had a laptop that was way too under-powered to deal with video, everything worked out very well. Even the issues we had weren't show-stoppers and we managed to work around them.

DebConf20 Moves Online For DebConf, we usually show a sponsors loop in between sessions. It's great that we give our sponsors visibility here, but in reality people see the sponsors loop and think "Talk over!" and then they look away. It's also completely silent and doesn't provide any additional useful information. I was wondering how I could take our lessons from MDCO#1 and integrate our new tricks with the sponsors loop. That is, add the schedule, time, some space to type announcements on the screen and also add some loopable music to it. I had used OBS before when making my videos, and like the flexibility it provides when working with scenes and sources. A scene is what you would think of as a screen or a document with its own collection of sources or elements. For example, a scene might contain sources such as a logo, clock, video, image, etc. A scene can also contain another scene. This is useful if you want to contain a banner or play some background music that is shared between scenes.

The above screenshots illustrate some basics of scenes and sources: first with just the DC20 banner, and then that used embedded in another scene. For MDCO#1, I copied and pasted the schedule into a LibreOffice Impress slide that was displayed on the stream. Having to do this for all 7 days of DebConf, plus dealing with scheduling changes, would be daunting. So, I started to look into generating some schedule slides programmatically. Stefano then pointed me to the Happening Now page on the DebConf website, where the current schedule block is displayed. So all I would need to do in OBS was to display a web page. Nice! Unfortunately the OBS in Debian doesn't have the ability to display web pages out of the box (we need to figure out CEF in Debian), but fortunately someone provides a pre-compiled version of a plugin called Linux Browser that works just fine. This allowed me to easily add the schedule page in its own scene. Being able to display a web page solved another problem. I wasn't fond of having to type / manage the announcements in OBS. It would be a bit prone to user error, and if you want to edit the text while the loop is running, you'd have to disrupt the loop, go to the foreground scene, and edit the text before resuming the loop. That's a bit icky. Then I thought that we could probably just get that from a web page instead. We could host some nice HTML snippet in a repository on salsa, and then anyone could easily submit an MR to update the announcement. But then I went a step further: use an etherpad! Then anyone in the orga team can quickly update the announcement and it would be instantly changed on the stream. Nice! So that small section of announcement text on the screen is actually a whole web browser with an added OBS filter to crop away all the pieces we don't want. Overkill? Sure, but it gave us a decent enough solution that worked in time for the start of DebConf. Also, being able to type directly onto the loop screen works out great, especially in an emergency. Oh, and uhm, the clock is also a website rendered in its own web browser :-P
So, I had the ability to make scenes and add all the minimal elements I wanted in there. Great! But now I had to figure out how to switch scenes automatically. It's probably worth mentioning that I only found some time to really dig into this right before DebConf started, so with all of this I was scrambling to find things that would work without too many bugs while also still being practical. Now I needed the ability to switch between the scenes automatically / programmatically. I had never done this in OBS before. I knew it has some API because there are Android apps that you can use to control OBS from your phone. I discovered that it had an automatic scene switcher, but it's very basic. It can only switch based on the active window, which can be useful in some cases, but since we won't have any windows open other than OBS, this tool was basically pointless.
After some quick searches, I found a plugin called Advanced Scene Switcher. This plugin can do a lot more, but has some weird UI choices, and is really meant for gamers and other types of professional streamers to help them automate their work flow; it doesn't seem at all meant to be used for a continuous loop. But it worked, and I could make it do something that would work for us during the DebConf. I had a chicken and egg problem because I had to figure out a programming flow, but didn't really have any content to work with, or an idea of all the content that we would eventually have. I'd been toying with the idea in my mind and had some notion that we could add fun facts, postcards (an image with some text), the time now in different timezones, Debian news (maybe procured by the press team), cards that contain the longer announcements that were sent to debconf-announce, perhaps a shout out or two and some photos from previous DebConfs like the group photos. I knew that I wouldn't be able to build anything substantial by the time DebConf started, but adding content to OBS in between talks is relatively easy, so we could keep on building on it during DebConf. Nattie provided the first shout out, and I made 2 video loops with the DC18/19 pictures and also two "Did you know" cards. So the flow I ended up with was: Sponsors -> Happening Now -> Random video (which would be any of those clips) -> Back to sponsors. This ended up working pretty well for quite a while. With the first batch of videos the sponsor loop would come up on average about every 2 minutes, but as much shorter clips like shout-outs started to come in faster and faster, it made sense to play 2-3 shout-outs before going back to sponsors. So here is a very brief guide on how I set up the sequencing in Advanced Scene Switcher.
If no condition was met, a video would play from the Random tab.
Then in the Random tab, I added the scenes that were part of the random mix. Annoyingly, you have to specify how long each should play for. If you don't, the "no condition" thingy is triggered and another video is selected. The time is also the length of the video minus one second, because...
You can't just say that a random video should return back to a certain scene; you have to specify that in the sequence tab for each video. Why after 1 second? Because, at least in my early tests (and I didn't circle back to this), it seems like 0s can randomly mean either instantly or never. Yes, this ended up being a bit confusing and tedious, and considering the late hours I worked on this, I'm surprised that I didn't manage to screw it up completely at any point. I also suspected that threads would eventually happen. That is, when people create video replies to other videos. We had 3 threads in total: a backups thread, a beverage thread and an impersonation thread. The arrow in the screenshot above points to the backups thread. I know it doesn't look that complicated, but it was initially somewhat confusing to set up and make sense of.
For the next event, the Advanced Scene Switcher might just get some more taming, or even be replaced entirely. There are ways to drive OBS by API, and even the Advanced Scene Switcher tool can be driven externally to some degree, but I think we definitely want to replace it by the next full DebConf. We had the problem that when a talk ended, we would return to the loop in the middle of a clip, which felt very unnatural and sometimes even confusing. So Stefano helped me with a helper script that could read the socket from Vocto, which I used to write either "Loop" or "Standby" to a file; the scene switcher would then watch that file and keep the sponsors loop ready to start while the talks play. Why not just switch to sponsors when the talk ends? Well, the little bit of delay in switching would mean that you would see a tiny bit of loop every time before switching to sponsors. This is also why we didn't have any loop for the ad-hoc track (that would have probably needed another OBS instance; we'll look more into solutions for this in the future).
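A minimal sketch of what such a helper might look like follows; the Voctocore host, port and message format used here are assumptions for illustration, not Stefano's actual script:
# watch the voctomix control port and record the stream state to a file
nc localhost 9999 | while read -r line; do
    case "$line" in
        *"stream_status live"*) echo Standby > /tmp/loopy-state ;;
        *"stream_status"*)      echo Loop    > /tmp/loopy-state ;;
    esac
done
The scene switcher then only needs to poll /tmp/loopy-state: "Standby" while a talk streams live (hold the sponsors loop ready), "Loop" once the talk ends.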
Then for all the clips. There were over 50 of them, all edited by hand in kdenlive. I removed any hard clicks, tried to improve audibility, removed some sections at the beginning and the end that seemed extraneous, and added some music that would reduce in volume when someone speaks. In the beginning, I had lots of fun choosing music for the clips. Towards the end, I had to rush them through and just chose the same tune whether it made sense or not. For a comparison of what a difference the music can make, compare the original and adapted version of Valhalla's clip above, or this original and adapted video from urbec. This part was a lot more fun than dealing with the video sequencer, but I also want to automate it a bit. When I can fully drive OBS from Python I'll likely instead want to show those cards and control music volume from Python (what could possibly go wrong?). The "loopy" name happened when I requested an @debconf.org alias for this. I was initially just thinking about loop@debconf.org, but since I wanted to make it clear that the purpose of this loop is also to have some fun, I opted for loopy instead:
I was really surprised by how people took to loopy. I hoped it would be good and that it would have somewhat positive feedback, but the positive feedback was just immense. The idea was that people typically saw it in between talks. But a few people told me they kept it playing after the last talk of the day to watch it in the background. Some asked for the music because they want to keep listening to it while working (and even for jogging!?). Some people also asked for recordings of the loop because they want to keep it for after DebConf. The shoutouts idea proved to be very popular. Overall, I'm very glad that people enjoyed it and I think it's safe to say that loopy will be back for the next event.
Also, throughout this experiment Loopy Loop turned into yet another DebConf mascot. We gain one about every DebConf, some by accident and some on purpose. This one was not quite on purpose. I meant to make an image for it for salsa, and started with an infinite loop symbol. That's a loop, but by just adding two more solid circles to it, it looks like googly eyes; now it's a proper loopy loop! I like the progress we've made on this, but there's still a long way to go, and the ideas keep heaping up. The next event is quite soon (MDCO#2 at the end of November, and it seems that 3 other MiniDebConf events may also be planned), but over the next few events there will likely be significantly better graphics/artwork, better sequencing, better flow and more layout options. I hope to gain some additional members in the team to deal with incoming requests during DebConf. It was quite hectic this time! The new OBS also has a scripting host that supports Python, so I should be able to do some nice things even within OBS without having to drive it externally (like displaying a clock without starting a web browser).

The Loopy Loop Music The two mini albums that mostly played during the first few days were just a copy and paste from the MDCO#1 music, which was:

For the shoutout tracks, which were later used in the loop too (because it became a bit monotonous), most came from freepd.com. I have many more things to say about DebConf20, but I'll keep that for another post, and hopefully we can get all the other video stuff in a post from the video team, because I think there's been some real good work done for this DebConf. Also thanks to Infomaniak, who was not only a platinum sponsor for this DebConf, but also provided us with plenty of computing power to run all the video stuff on. Thanks again!

22 July 2020

Enrico Zini: nc sudo

Question: what does this command do?
# Don't do this
nc localhost 12345 | sudo tar xf -
Answer: it sends the password typed into sudo to the other endpoint of netcat. I can reproduce this with both nc.traditional and nc.openbsd. One might be tempted to just put sudo in front of everything, but it'll mean that only nc will run as root:
# This is probably not what you want
sudo nc localhost 12345 | tar xf -
The fix that I will never remember, thanks to twb on IRC, is to close nc's stdin:
<&- nc localhost 12345 | sudo tar xf -
Or flip the table and just use sudo -s:
$ sudo -s
# nc localhost 12345 | tar xf -
Updates Harald Koenig suggested two alternative spellings that might be easier to remember:
nc localhost 12345 < /dev/null | sudo tar xf -
< /dev/null nc localhost 12345 | sudo tar xf -
And thinking along those lines, there could also be the disappointed face variant:
:| nc localhost 12345 | sudo tar xf -
Matthias Urlichs suggested the approach of precaching sudo's credentials, making the rest of the command lines more straightforward (and TIL: sudo id):
sudo id
nc localhost 12345 | sudo tar xf -
Or even better:
sudo id && nc localhost 12345 | sudo tar xf -
Shortcomings of nc | tar Tomas Janousek commented:
There's one more problem with a plain tar | nc / nc | tar without redirection or extra parameters: it doesn't know when to stop. So the best way to use it, I believe, is: tar c ... | nc -N ... and nc -d ... | tar x ... The -N option terminates the sending end of the connection, and the -d option tells the receiving netcat to never read any input. These two parameters, I hope, should also fix your sudo/password problem. Hope it helps!
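Spelled out concretely (a sketch assuming nc.openbsd on both ends, reusing the port and a hypothetical directory from the examples above):
# on the receiving side: -d tells nc to never read from stdin
nc -d -l -p 12345 | sudo tar xf -
# on the sending side: -N shuts down the socket after EOF on input
tar cf - somedir | nc -N localhost 12345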

16 July 2020

Russell Coker: Windows 10 on Debian under KVM

Here are some things that you need to do to get Windows 10 running on a Debian host under KVM. UEFI Booting UEFI is big and complex, but most of what it does isn't needed at all. If all you want to do is boot from an image of a disk with a GPT partition table then you just install the package ovmf and add something like the following to your KVM start script:
UEFI"-drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_VARS.fd"
Note that some of the documentation on this doesn't have the OVMF_VARS.fd file set to readonly. Allowing writes to that file means that the VM boot process (and maybe later) can change EFI variables that affect later boots and other VMs if they all share the same file. For a basic boot you don't need to change variables so you want it read-only. Also having it read-only is necessary if you want to run KVM as non-root. As an experiment I tried booting without the OVMF_VARS.fd file; it didn't boot, and then even after configuring it to use the OVMF_VARS.fd file again Windows gave a boot error about the boot configuration data file that required booting from recovery media. Apparently configuration mistakes with EFI can mess up the Windows installation, so be careful and backup the Windows installation regularly! Linux can boot from EFI but you generally don't want to unless the boot device is larger than 2TB. It's relatively easy to convert a Linux installation on a GPT disk to a virtual image on a DOS partition table disk or on block devices without partition tables, and that gives a faster boot. If the same person runs the host hardware and the VMs then the best choice for Linux is to have no partition tables, just one filesystem per block device (which makes resizing much easier), and have the kernel passed as a parameter to kvm. So booting a VM from EFI is probably only useful for booting Windows VMs and for Linux boot loader development and testing. As an aside, the Debian Wiki page about Secure Boot on a VM [4] was useful for this. It's unfortunate that it and so much of the documentation about UEFI is about secure boot, which isn't so useful if you just want to boot a system without regard to the secure boot features. Emulated IDE Disks Debian kernels (and probably kernels from many other distributions) are compiled with the paravirtualised storage device drivers. Windows by default doesn't support such devices, so you need to emulate an IDE/SATA disk so you can boot Windows and install the paravirtualised storage driver. The following configuration snippet has a commented line for paravirtualised IO (which is fast) and an uncommented line for a virtual IDE/SATA disk that will allow an unmodified Windows 10 installation to boot.
#DRIVE="-drive format=raw,file=/home/kvm/windows10,if=virtio"
DRIVE="-drive id=disk,format=raw,file=/home/kvm/windows10,if=none -device ahci,id=ahci -device ide-drive,drive=disk,bus=ahci.0"
Spice Video Spice is an alternative to VNC; here is the main web site for Spice [1]. Spice has many features that could be really useful for some people, like audio, sharing USB devices from the client, and streaming video support. I don't have a need for those features right now but it's handy to have options. My main reason for choosing Spice over VNC is that the mouse cursor in ssvnc doesn't follow the actual mouse and can be difficult or impossible to click on items near the edges of the screen. The following configuration will make the QEMU code listen with SSL on port 1234 on all IPv4 addresses. Note that this exposes the Spice password to anyone who can run ps on the KVM server; I've filed Debian bug #965061 requesting the option of a password file to address this. Also note that the qxl virtual video hardware is VGA compatible and can be expected to work with OS images that haven't been modified for virtualisation, but they work better with special video drivers.
KEYDIR=/etc/letsencrypt/live/kvm.example.com-0001
-spice password=xxxxxxxx,x509-cacert-file=$KEYDIR/chain.pem,x509-key-file=$KEYDIR/privkey.pem,x509-cert-file=$KEYDIR/cert.pem,tls-port=1234,tls-channel=main -vga qxl
To connect to the Spice server I installed the spice-client-gtk package in Debian and ran the following command:
spicy -h kvm.example.com -s 1234 -w xxxxxxxx
Note that this exposes the Spice password to anyone who can run ps on the system used as a client for Spice; I've filed Debian bug #965060 requesting the option of a password file to address this. This configuration with an unmodified Windows 10 image only supported a 800*600 resolution VGA display. Networking To setup bridged networking as non-root you need to do something like the following as root:
chgrp kvm /usr/lib/qemu/qemu-bridge-helper
setcap cap_net_admin+ep /usr/lib/qemu/qemu-bridge-helper
mkdir -p /etc/qemu
echo "allow all" > /etc/qemu/bridge.conf
chgrp kvm /etc/qemu/bridge.conf
chmod 640 /etc/qemu/bridge.conf
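As a quick sanity check of the above (a sketch; getcap ships in the libcap2-bin package), the helper should carry the capability and the config should be group-readable:
# confirm the capability on the bridge helper and the config permissions
getcap /usr/lib/qemu/qemu-bridge-helper
ls -l /etc/qemu/bridge.conf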
Windows 10 supports the emulated Intel E1000 network card. Configuration like the following configures networking on a bridge named br0 with an emulated E1000 card. MAC addresses that have a 1 in the second least significant bit of the first octet are locally administered (like IPv4 addresses starting with 10.); see the Wikipedia page about MAC Address for details. The following is an example of network configuration where $ID is an ID number for the virtual machine. So far I haven't come close to 256 VMs on one network so I've only needed one octet.
NET="-device e1000,netdev=net0,mac=02:00:00:00:01:$ID -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper,br=br0"
Final KVM Settings
KEYDIR=/etc/letsencrypt/live/kvm.example.com-0001
SPICE="-spice password=xxxxxxxx,x509-cacert-file=$KEYDIR/chain.pem,x509-key-file=$KEYDIR/privkey.pem,x509-cert-file=$KEYDIR/cert.pem,tls-port=1234,tls-channel=main -vga qxl"
UEFI="-drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_VARS.fd"
DRIVE="-drive format=raw,file=/home/kvm/windows10,if=virtio"
NET="-device e1000,netdev=net0,mac=02:00:00:00:01:$ID -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper,br=br0"
kvm -m 4000 -smp 2 $SPICE $UEFI $DRIVE $NET
Windows Settings The Spice Download page has a link for spice-guest-tools that has the QXL video driver among other things [2]. This seems to be needed for resolutions greater than 800*600. The Virt-Manager Download page has a link for virt-viewer, which is the Spice client for Windows systems [3]; they have MSI files for both i386 and AMD64 Windows. It's probably a good idea to set display and system to sleep after "never" (I haven't tested what happens if you don't do that, but there's no benefit in sleeping). Before uploading an image I disabled the pagefile and set the partition to the minimum size so I had less data to upload. Problems Here are some things I haven't solved yet. The aSpice Android client for the Spice protocol fails to connect with the QEMU code at the server, giving the following message on stderr: "error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca:../ssl/record/rec_layer_s3.c:1544:SSL alert number 48". Spice is supposed to support dynamic changes to screen resolution on the VM to match the window size at the client; this doesn't work for me, not even with the Red Hat QXL drivers installed. The Windows Spice client doesn't seem to support TLS; I guess running some sort of proxy for TLS would work but I haven't tried that yet.
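One way to start debugging that "unknown ca" alert is to inspect the certificate chain the server actually presents; a sketch, using the example hostname and Spice TLS port from the configuration above:
openssl s_client -connect kvm.example.com:1234 -showcerts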

14 July 2020

Russell Coker: Debian PPC64EL Emulation

In my post on Debian S390X Emulation [1] I mentioned having problems booting a Debian PPC64EL kernel under QEMU. Giovanni commented that they had PPC64EL working and gave a link to their site with Debian QEMU images for various architectures [2]. I tried their image, which worked, then tried mine again, which also worked; it seemed that a recent update in Debian/Unstable fixed the bug that made QEMU not work with the PPC64EL kernel. Here are the instructions on how to do it. First you need to create a filesystem in an image file with commands like the following:
truncate -s 4g /vmstore/ppc
mkfs.ext4 /vmstore/ppc
mount -o loop /vmstore/ppc /mnt/tmp
Then visit the Debian Netinst page [3] to download the PPC64EL net install ISO, and loopback mount it somewhere convenient like /mnt/tmp2. The package qemu-system-ppc has the program for emulating a PPC64LE system; the qemu-user-static package has the program for emulating PPC64LE for a single program (i.e. a statically linked program or a chroot environment), and you need the latter to run debootstrap. The following commands should be most of what you need.
apt install qemu-system-ppc qemu-user-static
update-binfmts --display
# qemu ppc64 needs exec stack to solve "Could not allocate dynamic translator buffer"
# so enable that on SE Linux systems
setsebool -P allow_execstack 1
debootstrap --foreign --arch=ppc64el --no-check-gpg buster /mnt/tmp file:///mnt/tmp2
chroot /mnt/tmp /debootstrap/debootstrap --second-stage
cat << END > /mnt/tmp/etc/apt/sources.list
deb http://mirror.internode.on.net/pub/debian/ buster main
deb http://security.debian.org/ buster/updates main
END
echo "APT::Install-Recommends False;" > /mnt/tmp/etc/apt/apt.conf
echo ppc64 > /mnt/tmp/etc/hostname
# /usr/bin/awk: error while loading shared libraries: cannot restore segment prot after reloc: Permission denied
# only needed for chroot
setsebool allow_execmod 1
chroot /mnt/tmp apt update
# why aren't they in the default install?
chroot /mnt/tmp apt install perl dialog
chroot /mnt/tmp apt dist-upgrade
chroot /mnt/tmp apt install bash-completion locales man-db openssh-server build-essential systemd-sysv ifupdown vim ca-certificates gnupg
# install kernel last because systemd install rebuilds initrd
chroot /mnt/tmp apt install linux-image-ppc64el
chroot /mnt/tmp dpkg-reconfigure locales
chroot /mnt/tmp passwd
cat << END > /mnt/tmp/etc/fstab
/dev/vda / ext4 noatime 0 0
#/dev/vdb none swap defaults 0 0
END
mkdir /mnt/tmp/root/.ssh
chmod 700 /mnt/tmp/root/.ssh
cp ~/.ssh/id_rsa.pub /mnt/tmp/root/.ssh/authorized_keys
chmod 600 /mnt/tmp/root/.ssh/authorized_keys
rm /mnt/tmp/vmlinux* /mnt/tmp/initrd*
mkdir /boot/ppc64
cp /mnt/tmp/boot/[vi]* /boot/ppc64
# clean up
umount /mnt/tmp
umount /mnt/tmp2
# setcap binary for starting bridged networking
setcap cap_net_admin+ep /usr/lib/qemu/qemu-bridge-helper
# afterwards set the access on /etc/qemu/bridge.conf so it can only
# be read by the user/group permitted to start qemu/kvm
echo "allow all" > /etc/qemu/bridge.conf
Here is an example script for starting kvm. It can be run by any user that can read /etc/qemu/bridge.conf.
#!/bin/bash
set -e
KERN="kernel /boot/ppc64/vmlinux-4.19.0-9-powerpc64le -initrd /boot/ppc64/initrd.img-4.19.0-9-powerpc64le"
# single network device, can have multiple
NET="-device e1000,netdev=net0,mac=02:02:00:00:01:04 -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper"
# random number generator for fast start of sshd etc
RNG="-object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0"
# I have lockdown because it does no harm now and is good for future kernels
# I enable SE Linux everywhere
KERNCMD="net.ifnames=0 noresume security=selinux root=/dev/vda ro lockdown=confidentiality"
kvm -drive format=raw,file=/vmstore/ppc64,if=virtio $RNG -nographic -m 1024 -smp 2 $KERN -curses -append "$KERNCMD" $NET

9 July 2020

Enrico Zini: Laptop migration

This laptop used to be extra-flat. My laptop battery started to explode in slow motion. HP requires 10 business days to repair my laptop under warranty, and I cannot afford that length of downtime. Alternatively, HP quoted me 375 + VAT for on-site repairs, which I thought was very funny. For 376.55 + VAT, which is pretty much exactly the same amount, I bought instead a refurbished ThinkPad X240 with a dual-core i5, 8G of RAM, a 250G SSD, and a 1920x1080 IPS display, to use as a spare while my laptop is being repaired. I'd like to thank HP for giving me the opportunity to own a ThinkPad. Since I'm migrating my whole system to the spare and then (hopefully) back, I'm documenting what I need to be fully productive on new hardware. Install Debian A basic Debian netinst with no tasks selected is good enough to get going. Note that if wifi worked in the Debian Installer, it doesn't mean that it will work in the minimal system it installed. See here for instructions on quickly bringing up wifi on a newly installed minimal system. Copy /home A simple tar of /home is all I needed to copy my data over. A neat way to do it was connecting the two laptops with an ethernet cable and using netcat:
# On the source
tar -C / -zcf - home | nc -l -p 12345 -N
# On the target
nc 10.0.0.1 12345 | tar -C / -zxf -
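A hedged variant for when only an untrusted link is available: pull the same tar stream through ssh instead, assuming the source at 10.0.0.1 runs sshd and permits root logins:
# On the target: pull /home from the source over ssh instead of netcat
ssh root@10.0.0.1 'tar -C / -zcf - home' | tar -C / -zxf -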
Since the data travels unencrypted with plain netcat, don't do it over wifi (the ssh variant above is the safe fallback). Install packages I maintain a few simple local metapackages that depend on the packages I usually use. I could just install those and let apt bring in their dependencies. For the build dependencies of the programs I develop, I use mk-build-deps from the devscripts package to create metapackages that make sure they are installed; a usage sketch follows the control extract below. Here's an extract from debian/control of the metapackage:
Source: enrico
Section: admin
Priority: optional
Maintainer: Enrico Zini <enrico@debian.org>
Build-Depends: debhelper (>= 11)
Standards-Version: 3.7.2.1
Package: enrico
Section: admin
Architecture: all
Depends:
  mc, mmv, moreutils, powertop, syncmaildir, notmuch,
  ncdu, vcsh, ddate, jq, git-annex, eatmydata,
  vdirsyncer, khal, etckeeper, moc, pwgen
Description: Enrico's working environment
Package: enrico-devel
Section: devel
Architecture: all
Depends:
  git, python3-git, git-svn, gitk, ansible, fabric,
  valgrind, kcachegrind, zeal, meld, d-feet, flake8, mypy, ipython3,
  strace, ltrace
Description: Enrico's development environment
Package: enrico-gui
Section: x11
Architecture: all
Depends:
  xclip, gnome-terminal, qalculate-gtk, liferea, gajim,
  mumble, sm, syncthing, virt-manager
Recommends: k3b
Description: Enrico's GUI environment
Package: enrico-sanity
Section: admin
Architecture: all
Conflicts: libapache2-mod-php, libapache2-mod-php5, php5, php5-cgi, php5-fpm, libapache2-mod-php7.0, php7.0, libphp7.0-embed, libphp-embed, libphp5-embed
Description: Enrico's sanity
 Metapackage with a list of packages that I do not want anywhere near my
 system.
System-wide customizations I tend to avoid changing system-wide configuration as much as possible, so copying over /home and installing packages takes care of 99% of my needs. There are a few system-wide tweaks I cannot do without. For postfix, I have a little ansible playbook that takes care of it. Network Manager system connections need to be copied manually: a plain copy and a systemctl restart network-manager are enough. Note that Network Manager will ignore the files unless their owner and permissions are what it expects. Fine tuning Comparing the output of dpkg --get-selections between the old and the new system might highlight packages installed manually in a hurry and not yet added to the metapackages. Finally, what remains is fixing the sad state of mimetype associations, which seem to pick the opening application based on whatever application was installed last, the phases of the moon, and whichever option is the most annoying. Currently on my system, PDFs are opened in inkscape by xdg-open and in calibre by run-mailcap. Let's see how long it takes to figure this one out.
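A hedged starting point for the PDF case, using xdg-utils (the .desktop file name is an assumption; it varies by system and viewer):
# show which application xdg-open currently resolves for PDFs
xdg-mime query default application/pdf
# point PDFs at evince instead (updates ~/.config/mimeapps.list)
xdg-mime default org.gnome.Evince.desktop application/pdf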

5 July 2020

Russell Coker: Debian S390X Emulation

I decided to set up some virtual machines for different architectures. One that I decided to try was S390X, the latest 64-bit version of the IBM mainframe. Here's how to do it; I tested on a host running Debian/Unstable, but Buster should work in the same way. First you need to create a filesystem in an image file with commands like the following:
truncate -s 4g /vmstore/s390x
mkfs.ext4 /vmstore/s390x
mount -o loop /vmstore/s390x /mnt/tmp
Then visit the Debian Netinst page [1] to download the S390X net install ISO, and loopback mount it somewhere convenient like /mnt/tmp2. The package qemu-system-misc has the program for emulating an S390X system (among many others); the qemu-user-static package has the program for emulating S390X for a single program (i.e. a statically linked program or a chroot environment), which you need to run debootstrap. The following commands should be most of what you need.
# Install the basic packages you need
apt install qemu-system-misc qemu-user-static debootstrap
# List the support for different binary formats
update-binfmts --display
# qemu s390x needs exec stack to solve "Could not allocate dynamic translator buffer"
# so you probably need this on SE Linux systems
setsebool allow_execstack 1
# commands to do the main install
debootstrap --foreign --arch=s390x --no-check-gpg buster /mnt/tmp file:///mnt/tmp2
chroot /mnt/tmp /debootstrap/debootstrap --second-stage
# set the apt sources
cat << END > /mnt/tmp/etc/apt/sources.list
deb http://YOURLOCALMIRROR/pub/debian/ buster main
deb http://security.debian.org/ buster/updates main
END
# for minimal install do not want recommended packages
echo "APT::Install-Recommends False;" > /mnt/tmp/etc/apt/apt.conf
# update to latest packages
chroot /mnt/tmp apt update
chroot /mnt/tmp apt dist-upgrade
# install kernel, ssh, and build-essential
chroot /mnt/tmp apt install bash-completion locales linux-image-s390x man-db openssh-server build-essential
chroot /mnt/tmp dpkg-reconfigure locales
echo s390x > /mnt/tmp/etc/hostname
chroot /mnt/tmp passwd
# copy kernel and initrd
mkdir -p /boot/s390x
cp /mnt/tmp/boot/vmlinuz* /mnt/tmp/boot/initrd* /boot/s390x
# setup /etc/fstab
cat << END > /mnt/tmp/etc/fstab
/dev/vda / ext4 noatime 0 0
#/dev/vdb none swap defaults 0 0
END
# clean up
umount /mnt/tmp
umount /mnt/tmp2
# setcap binary for starting bridged networking
setcap cap_net_admin+ep /usr/lib/qemu/qemu-bridge-helper
# afterwards set the access on /etc/qemu/bridge.conf so it can only
# be read by the user/group permitted to start qemu/kvm
echo "allow all" > /etc/qemu/bridge.conf
Some of the above should be treated as pseudo-code in shell script rather than an exact way of doing things. While you can copy and paste all of the above into a command line and have a reasonable chance of it working, I think it is better to look at each command and decide whether it's right for you and whether you need to alter it slightly for your system. To run qemu as non-root you need a helper program with extra capabilities to set up bridged networking; I've included that in the explanation because I think it's important to have all security options enabled. The -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-ccw,rng=rng0 part gives entropy to the VM from the host, otherwise it will take ages to start sshd. Note that this is slightly but significantly different from the command used for other architectures (the ccw is the difference). I'm not sure if noresume on the kernel command line is required, but it doesn't do any harm. The net.ifnames=0 stops systemd from renaming Ethernet devices. For the virtual networking, the ccw is again a difference from other architectures. Here is a basic command to run a QEMU virtual S390X system. If all goes well it should give you a login: prompt on a curses-based text display; you can then login as root and should be able to run dhclient eth0 and similar commands to set up networking and allow ssh logins.
qemu-system-s390x -drive format=raw,file=/vmstore/s390x,if=virtio -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-ccw,rng=rng0 -nographic -m 1500 -smp 2 -kernel /boot/s390x/vmlinuz-4.19.0-9-s390x -initrd /boot/s390x/initrd.img-4.19.0-9-s390x -curses -append "net.ifnames=0 noresume root=/dev/vda ro" -device virtio-net-ccw,netdev=net0,mac=02:02:00:00:01:02 -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper
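Once the guest is up, a minimal sketch of those in-guest steps (the sshd change assumes you are happy with password logins for root, as on the demo system below):
# inside the guest: get an address on the bridged NIC
dhclient eth0
# allow root password logins for initial setup, then restart sshd
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
systemctl restart ssh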
Here is a slightly more complete QEMU command. It has two block devices, for root and swap. It has SE Linux enabled for the VM (SE Linux works nicely on S390X). I added the lockdown=confidentiality kernel security option even though it's not supported in 4.19 kernels; it doesn't do any harm, and when I upgrade systems to newer kernels I won't have to remember to add it.
qemu-system-s390x -drive format=raw,file=/vmstore/s390x,if=virtio -drive format=raw,file=/vmswap/s390x,if=virtio -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-ccw,rng=rng0 -nographic -m 1500 -smp 2 -kernel /boot/s390x/vmlinuz-4.19.0-9-s390x -initrd /boot/s390x/initrd.img-4.19.0-9-s390x -curses -append "net.ifnames=0 noresume security=selinux root=/dev/vda ro lockdown=confidentiality" -device virtio-net-ccw,netdev=net0,mac=02:02:00:00:01:02 -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper
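For the swap drive referenced above, a minimal sketch to create the backing file on the host (mkswap works on a plain file; the guest sees it as /dev/vdb, matching the commented-out fstab entry written earlier, which can be uncommented inside the guest):
# create and pre-format a 1G swap image for the second virtio drive
truncate -s 1g /vmswap/s390x
mkswap /vmswap/s390x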
Try It Out I've got an S390X system online for a while; ssh root@s390x.coker.com.au with password SELINUX to try it out. PPC64 I've tried running a PPC64 virtual machine. I did the same things to set it up and then tried launching it, with the following result:
qemu-system-ppc64 -drive format=raw,file=/vmstore/ppc64,if=virtio -nographic -m 1024 -kernel /boot/ppc64/vmlinux-4.19.0-9-powerpc64le -initrd /boot/ppc64/initrd.img-4.19.0-9-powerpc64le -curses -append "root=/dev/vda ro"
Above is the minimal qemu command that I'm using. Below is the result; it stops after the "4." from "4.19.0-9". Note that I had originally tried with a more complete and usable set of options, but I trimmed it to the minimum needed to demonstrate the problem.
  Copyright (c) 2004, 2017 IBM Corporation All rights reserved.
  This program and the accompanying materials are made available
  under the terms of the BSD License available at
  http://www.opensource.org/licenses/bsd-license.php
Booting from memory...
Linux ppc64le
#1 SMP Debian 4.
The kernel is from the package linux-image-4.19.0-9-powerpc64le which is a dependency of the package linux-image-ppc64el in Debian/Buster. The program qemu-system-ppc64 is from version 5.0-5 of the qemu-system-ppc package. Any suggestions on what I should try next would be appreciated.

16 April 2020

Rigved Rakshit: Intermediate Statistics

These are some of my notes on intermediate statistics from the Udemy Data Science Bootcamp. The Python code associated with this section is available here.

29 March 2020

Paulo Henrique de Lima Santana: My free software activities in February 2020

March is ending, but I finally wrote my monthly report about my activities in Debian and Free Software in general for February. As I already wrote here, I attended FOSDEM 2020 on February 1st and 2nd in Brussels. It was an amazing experience. After my return to Curitiba, I felt my energy renewed to take on new challenges.

MiniDebConf Maceió 2020 I continued helping to organize the MiniDebConf, and I got positive answers from 4Linux and Globo.com: they are sponsoring the event.

FLISOL 2020 I started to talk with Maristela from IEP - Instituto de Engenharia do Paraná, and after some messages I joined a meeting with her and other members of the Câmara Técnica de Eletrônica, Computação e Ciências de Dados. I explained FLISOL in Curitiba to them and they agreed to host the event at IEP. I asked to use three spaces: the Auditorium for the FLISOL talks, the Salão Nobre for meetups of the WordPress and PostgreSQL communities, and the hall for the Install Fest. Besides FLISOL, they would like to host other events and meetups of communities in Curitiba such as Python, PHP, and so on, at least one per month. I helped to schedule a PHP Paraná community meetup in March.

New job Since the 17th I have been working at Rentcars as an Infrastructure Analyst. I'm very happy to work there because we use a lot of FLOSS and the people are nice. Ubuntu LTS is the approved OS for desktops, but I was allowed to install Debian on my laptop :-)

Misc I signed pgp keys of friends I met in Brussels and had my pgp key signed by them. Finally, my MR to the DebConf20 website fixing some texts was accepted. I have watched videos from FOSDEM; so far, I have seen these great talks:
  • Growing Sustainable Contributions Through Ambassador Networks
  • Building Ethical Software Under Capitalism
  • Cognitive biases, blindspots and inclusion
  • Building a thriving community in company-led open source projects
  • Building Community for your Company s OSS Projects
  • The Ethics of Open Source
  • Be The Leader You Need in Open Source
  • The next generation of contributors is not on IRC
  • Open Source Won, but Software Freedom Hasn t Yet
  • Open Source Under Attack
  • Lessons Learned from Cultivating Open Source Projects and Communities
That's all folks!
